Showing grants tagged "Potential Risks from Advanced AI"

We’re open to supporting safe bets, like direct cash transfers to the world’s poorest people, as well as high-risk, high-reward projects, like minimizing risks from potentially world-changing science and technology. Read more about how we choose what to fund here.

  1. Stanford University — AI Economic Impacts Workshop (Award Date: 11/2023; Amount: $120,000)
  2. EleutherAI — Interpretability Research (Award Date: 11/2023; Amount: $2,642,273)
  3. London Initiative for Safe AI (LISA) — General Support (Award Date: 11/2023; Amount: $237,000)
  4. Berkeley Existential Risk Initiative — University Collaboration Program (Award Date: 10/2023; Amount: $70,000)
  5. CAIS — Exit Grant (Oct 2023) (Award Date: 10/2023; Amount: $1,866,559)
  6. RAND Corporation — Emerging Technology Initiatives (Award Date: 10/2023; Amount: $10,500,000)
  7. Northeastern University — Mechanistic Interpretability Research (Award Date: 09/2023; Amount: $116,072)
  8. OpenMined — Software for AI Audits (Award Date: 09/2023; Amount: $6,000,000)
  9. FAR AI — Alignment Workshop (Award Date: 09/2023; Amount: $166,500)
  10. Matthew Kenney — Language Model Capabilities Benchmarking (Award Date: 09/2023; Amount: $397,350)