PauseAI
Formation: April 2023
Founder: Joep Meindertsma
Founded at: Utrecht, Netherlands
Type: Advocacy group, nonprofit
Purpose: Mitigating the existential risk from artificial general intelligence and other risks of advanced artificial intelligence
Region: International
Website: pauseai.info

PauseAI is a global political movement founded in the Netherlands with the stated aim of achieving global coordination to stop the development of artificial intelligence systems more powerful than GPT-4, at least until it is known how to build them safely.[1] The movement was established in Utrecht in April 2023 by tech CEO Joep Meindertsma.[2]

Citing the control problem and the alignment problem, PauseAI is concerned about the existential risk from artificial general intelligence.[3]

Its first public action was a protest in front of Microsoft's Brussels lobbying office in May 2023, while the company was holding an event on artificial intelligence.[4] On 20 October 2023, the group gathered in Parliament Square Gardens to call for a "global moratorium" ahead of the AI Safety Summit at Bletchley Park the following month.[5]

Background

In March 2023, the Future of Life Institute released an open letter calling for "AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4"; it has since received over 30,000 signatures.[6] Signatories included AI researchers such as Yoshua Bengio[7] and Stuart Russell,[8] along with tech entrepreneurs like Elon Musk and Jaan Tallinn.[9] Eliezer Yudkowsky, lead researcher at the Machine Intelligence Research Institute, did not sign the letter, stating: "I refrained from signing because I think the letter is understating the seriousness of the situation and asking for too little to solve it."[10]

Proposal

PauseAI's stated goal is to "implement a pause on the training of AI systems more powerful than GPT-4". Its website lists proposed steps to achieve this goal:[11]

  • Set up an international AI safety agency, similar to the IAEA. This agency will be responsible for:
    • Granting approval for deployments. This will include red-teaming / model evaluations.
    • Granting approval for new training runs of AI models above a certain size (e.g. 1 billion parameters).
    • Periodic meetings to discuss the progress of AI safety research.
  • Only allow training of general AI systems more powerful than GPT-4 if their safety can be guaranteed.
    • By more powerful than GPT-4, we mean all AI models that either 1) have more than 10^12 parameters, 2) used more than 10^25 FLOPs for training, or 3) are expected to exceed GPT-4 in capabilities.
    • Note that this does not target narrow AI systems, like image recognition used for diagnosing cancer.
    • Require oversight during training runs.
    • Safety can be guaranteed if there is strong scientific consensus and proof that the alignment problem has been solved. Right now this is not the case, so we should not allow training of such systems.
    • It is possible that the AI alignment problem will never be solved; it may be unsolvable. In that case, we should never allow training of such systems.
    • Even if we can build controllable, safe AI, only build and deploy such technology with strong democratic control. A superintelligence is too powerful to be controlled by a single company or country.
    • Track the sales of GPUs and other hardware that can be used for AI training.
  • Only allow deployment of models after verifying that no dangerous capabilities are present.
    • We will need standards and independent red-teaming to determine whether a model has dangerous capabilities.
    • The list of dangerous capabilities may change over time as AI capabilities grow.
    • Note that fully relying on model evaluations is not enough.

Organization

History

Support and funding

Media coverage

See also

References

  1. ^ "PauseAI Proposal". PauseAI. Retrieved 2024-05-02.
  2. ^ Meaker, Morgan. "Meet the AI Protest Group Campaigning Against Human Extinction". Wired. ISSN 1059-1028. Retrieved 2024-04-30.
  3. ^ "The existential risk of superintelligent AI". PauseAI. Retrieved 2024-04-30.
  4. ^ "The rag-tag group trying to pause AI in Brussels". POLITICO. 2023-05-24. Retrieved 2024-04-30.
  5. ^ "New Pause AI demand for moratorium". Camden New Journal. Retrieved 2024-04-30.
  6. ^ "Pause Giant AI Experiments: An Open Letter". Future of Life Institute. Retrieved 2024-04-30.
  7. ^ Gill, Anne-Sophie (2023-04-05). "Statement from Yoshua Bengio after signing open letter on giant AI systems". Mila. Retrieved 2024-04-30.
  8. ^ "EXCLUSIVE: AI guru Prof Stuart Russell explains why he signed a letter with Elon Musk and others to pause AI development". Business Today. 2023-04-11. Retrieved 2024-04-30.
  9. ^ Knight, Will. "In Sudden Alarm, Tech Doyens Call for a Pause on ChatGPT". Wired. ISSN 1059-1028. Retrieved 2024-04-30.
  10. ^ "The Open Letter on AI Doesn't Go Far Enough". TIME. 2023-03-29. Retrieved 2024-04-30.
  11. ^ "PauseAI Proposal". PauseAI. Retrieved 2024-04-30.

External links