Showing grants tagged "Potential Risks from Advanced AI"

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund on our site.

  1. Arizona State University — Adversarial Robustness Research
     Award Date: 08/2022 | Amount: $200,000 | Focus Area: Potential Risks from Advanced AI
  2. Redwood Research — General Support (2022)
     Award Date: 08/2022 | Amount: $10,700,000 | Focus Area: Potential Risks from Advanced AI
  3. UW — Philosophy of AI Course Development
     Award Date: 07/2022 | Amount: $16,500 | Focus Area: Potential Risks from Advanced AI
  4. CMU — Research on Adversarial Examples
     Award Date: 07/2022 | Amount: $343,235 | Focus Area: Potential Risks from Advanced AI
  5. Center for a New American Security — Work on AI Governance
     Award Date: 07/2022 | Amount: $5,149,398 | Focus Area: Potential Risks from Advanced AI
  6. Stanford University — AI Alignment Research (Barrett and Viteri)
     Award Date: 07/2022 | Amount: $153,820 | Focus Area: Potential Risks from Advanced AI
  7. Berkeley Existential Risk Initiative — Language Model Alignment Research
     Award Date: 06/2022 | Amount: $40,000 | Focus Area: Potential Risks from Advanced AI
  8. Century Fellowship — 2022 Cohort
     Award Date: 06/2022 | Amount: $1,630,927 | Focus Area: Longtermism
  9. Epoch — General Support
     Award Date: 06/2022 | Amount: $1,960,000 | Focus Area: Global Catastrophic Risks
  10. BERI — SERI MATS Program (2022)
      Award Date: 06/2022 | Amount: $1,008,127 | Focus Area: Potential Risks from Advanced AI