We want to maximize the impact of our portfolio.

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund.

  1. Fred Hutchinson Cancer Center — CAR T-cell Cancer Treatment
     Award Date: 11/2025 · Amount: $195,000 · Focus Area: Scientific Research
  2. Supervised Program for Alignment Research — Research Mentorship Program
     Award Date: 08/2025 · Amount: $189,350 · Focus Area: Global Catastrophic Risks
  3. Harmony Intelligence — LLM Moneymaking Benchmark
     Award Date: 06/2025 · Amount: $469,625 · Focus Area: Potential Risks from Advanced AI
  4. Alignment Research Engineer Accelerator — AI Safety Technical Program (2025)
     Award Date: 01/2025 · Amount: $1,000,000 · Focus Area: Global Catastrophic Risks
  5. EA Funds — Operating Expenses (2025)
     Award Date: 01/2025 · Amount: $602,000 · Focus Area: Global Catastrophic Risks
  6. Training for Good — Operating Costs and EU Tech Policy Fellowship
     Award Date: 01/2025 · Amount: $461,069 · Focus Area: Potential Risks from Advanced AI
  7. Animal Welfare Observatory — Farm Animal Welfare Campaigns
     Award Date: 01/2025 · Amount: $1,303,598 · Focus Area: Farm Animal Welfare
  8. Esade — Course Buyouts
     Award Date: 12/2024 · Amount: $135,000 · Focus Area: Innovation Policy
  9. World Animal Protection — Farm Animal Welfare in Asia (2024)
     Award Date: 12/2024 · Amount: $600,000 · Focus Area: Farm Animal Welfare
  10. Good Ancestors Policy — Global Catastrophic Risks Advocacy
     Award Date: 12/2024 · Amount: $523,800 · Focus Area: Potential Risks from Advanced AI