Showing grants tagged "Potential Risks from Advanced AI"

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund.

  1. FAR AI — General Support
     Award Date: 12/2022 · Amount: $625,000
  2. FAR AI — Inverse Scaling Prize
     Award Date: 12/2022 · Amount: $49,500
  3. FAR AI — Interpretability Research
     Award Date: 12/2022 · Amount: $50,000
  4. Georgetown University — Policy Fellowship (2022)
     Award Date: 12/2022 · Amount: $239,061
  5. Northeastern University — Large Language Model Interpretability Research
     Award Date: 11/2022 · Amount: $562,128
  6. AI Safety Support — SERI MATS Program
     Award Date: 11/2022 · Amount: $1,538,000
  7. Alignment Research Center — General Support (November 2022)
     Award Date: 11/2022 · Amount: $1,250,000
  8. Center for AI Safety — General Support
     Award Date: 11/2022 · Amount: $5,160,000
  9. Berkeley Existential Risk Initiative — Machine Learning Alignment Theory Scholars
     Award Date: 11/2022 · Amount: $2,047,268
  10. Berkeley Existential Risk Initiative — General Support (2022)
     Award Date: 11/2022 · Amount: $100,000