We want to maximize the impact of our portfolio.

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund here.

  1. Alignment Research Engineer Accelerator — AI Safety Technical Program (2024)
     Award Date: 07/2024 | Amount: $318,272 | Focus Area: Global Catastrophic Risks
  2. New York University — LLM Cybersecurity Benchmark
     Award Date: 07/2024 | Amount: $2,077,350 | Focus Area: Potential Risks from Advanced AI
  3. WITS Health Consortium — Tuberculosis Vaccine Preclinical Development and Testing
     Award Date: 07/2024 | Amount: $696,742 | Focus Area: Scientific Research
  4. La Trobe University — Pediatric Sepsis Treatment
     Award Date: 07/2024 | Amount: $914,522 | Focus Area: Scientific Research
  5. Anhimalia — Cage-free Campaigns in Argentina
     Award Date: 07/2024 | Amount: $96,000 | Focus Area: Farm Animal Welfare
  6. Stanford University — LLM Cybersecurity Benchmark
     Award Date: 07/2024 | Amount: $2,937,000 | Focus Area: Potential Risks from Advanced AI
  7. Erasmus MC, University Medical Center Rotterdam — Emodepside Disease Modeling
     Award Date: 07/2024 | Amount: $220,158 | Focus Area: Global Health R&D
  8. Trustees of Boston University — LLM Research Benchmark
     Award Date: 07/2024 | Amount: $756,396 | Focus Area: Potential Risks from Advanced AI
  9. Centre for International Governance Innovation — Global AI Risks Initiative
     Award Date: 07/2024 | Amount: $300,000 | Focus Area: Potential Risks from Advanced AI
  10. University of Washington — Syphilis Vaccine Development (Lorenzo Giacani) (2024)
      Award Date: 06/2024 | Amount: $1,025,884 | Focus Area: Scientific Research