Is Superintelligence Safe for the Future of Humanity? We've published the Future of Life Institute's statement, signed by world-renowned figures… and we've also published what the artificial intelligence ChatGPT thinks of it.

This is what the Future of Life Institute is spreading:


The Future of Life Institute invites you to sign a 1-sentence statement on superintelligence that just launched today, with remarkably broad support! They wanted you to have the opportunity to join as an early signatory. The letter has already made headlines around the world, and we want you to be a part of it!

Sign the statement

  • Financial Times | Steve Bannon and Meghan Markle among 800 public figures calling for AI ‘superintelligence’ ban
  • Bloomberg | Prince Harry, Geoffrey Hinton Call for Ban on AI Superintelligence
  • Axios | AI leaders push to pause superintelligence
  • The Guardian | Harry and Meghan join AI pioneers in call for ban on superintelligent systems
  • TIME | ‘Time Is Running Out’: New Open Letter Calls for Ban on Superintelligent AI Development
  • The Conversation | AI heavyweights call for end to ‘superintelligence’ research
  • CNBC | Hundreds of public figures, including Apple co-founder Steve Wozniak and Virgin’s Richard Branson urge AI ‘superintelligence’ ban
  • 📹 Fox & Friends [VIDEO] | Tech leaders urge ban on superintelligence: ‘We’re not heading in a good direction’

Plus plenty in Brazil, India, and more…

A bit more about the statement:
AI tools may bring unprecedented health and prosperity.
However, alongside tools, many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction.
It will be terrific if you can join us, because it will help create common knowledge of the growing number of people who oppose a premature rush to human disempowerment (as opposed to new helpful and controllable AI tools).
Many thanks!

Sign the statement

Prof. Max Tegmark
President, Future of Life Institute
Institute for Artificial Intelligence & Fundamental Interactions
Center for Brains, Minds and Machines
Department of Physics
Massachusetts Institute of Technology

FLI is a 501(c)(3) non-profit organisation, meaning donations are tax exempt in the United States. If you need our organisation number (EIN) for your tax return, it’s 47-1052538. FLI is registered in the EU Transparency Register. Our ID number is 787064543128-10.

Future of Life Institute, 933 Montgomery Avenue, #1012, Narberth, Pennsylvania 19072, United States

22,426 signatures

Including 19,220 from the same petition by Ekō

Statement on Superintelligence

Context: Innovative AI tools may bring unprecedented health and prosperity. However, alongside tools, many leading AI companies have the stated goal of building superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks. This has raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction. The succinct statement below aims to create common knowledge of the growing number of experts and public figures who oppose a rush to superintelligence.

For corrections, technical support, or press enquiries, please contact letters@futureoflife.org

Statement

We call for a prohibition on the development of superintelligence, not lifted before there is

  1. broad scientific consensus that it will be done safely and controllably, and
  2. strong public buy-in.

Geoffrey Hinton

Emeritus Professor of Computer Science, University of Toronto, Nobel Laureate, Turing Laureate, world’s 2nd most cited scientist

Yoshua Bengio

Professor of Computer Science, U. Montreal/Mila, Turing Laureate, world’s most cited scientist

“Frontier AI systems could surpass most individuals across most cognitive tasks within just a few years. These advances could unlock solutions to major global challenges, but they also carry significant risks. To safely advance toward superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of harming people, whether through misalignment or malicious use. We also need to make sure the public has a much stronger say in decisions that will shape our collective future.”

Stuart Russell

Professor of Computer Science, Berkeley, Director of the Center for Human-Compatible Artificial Intelligence (CHAI); Co-author of the standard textbook ‘Artificial Intelligence: a Modern Approach’

“This is not a ban or even a moratorium in the usual sense. It’s simply a proposal to require adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction. Is that too much to ask?”

Steve Wozniak

Co-founder of Apple

Sir Richard Branson

Founder, Virgin Group

Steve Bannon

Fmr Executive Chairman of Breitbart News; fmr chief strategist to President Donald Trump; Host of War Room podcast

Glenn Beck

Founder of Blaze media, radio host, TV personality, political commentator

Susan Rice

Fmr U.S. National Security Advisor under President Obama; U.S. Ambassador to the United Nations; Rhodes Scholar

Mike Mullen

U.S. Navy Admiral (ret), Chairman of the Joint Chiefs of Staff under Presidents George W. Bush and Barack Obama

Joe Crowley

Former Congressman (D) representing New York and House Democratic Caucus Chair

Key polling results on Superintelligence

  • 5% of U.S. adults support the status quo of fast, unregulated development
  • 64% believe superhuman AI shouldn’t be made until proven safe or controllable, or should never be made
  • 73% want robust regulation on advanced AI

Polling Results

Comments from signatories

Brian Higgins

Former Congressman (D) representing New York

Steve Israel

Former Congressman (D) representing New York

Keith Rothfus

Former Congressman (R) representing Pennsylvania 

Ed Perlmutter

Former Congressman (D) representing Colorado

John Yarmuth

Former Congressman (D) representing Kentucky

Mary Robinson

Fmr President of Ireland; Fmr UN High Commissioner for Human Rights

“AI offers extraordinary promise to advance human rights, tackle inequality, and protect our planet, but the pursuit of superintelligence threatens to undermine the very foundations of our common humanity. We must act with both ambition and responsibility by choosing the path of human-centred AI that serves dignity and justice.”

Prince Harry, Duke of Sussex

Co-Founder, The Archewell Foundation

“The future of AI should serve humanity, not replace it. The true test of progress will be not how fast we move, but how wisely we steer.”

Meghan, Duchess of Sussex

Co-Founder, The Archewell Foundation

Mark Beall

Fmr Director of AI Strategy and Policy, Department of Defense

“When AI researchers warn of extinction and tech leaders build doomsday bunkers, prudence demands we listen. Superintelligence without proper safeguards could be the ultimate expression of human hubris—power without moral restraint.”

Desmond Browne, Lord Browne of Ladyton

Fmr UK Defence Minister, Member of the UK House of Lords

Public statements by non-signatories

Andre Hoffman

Vice-chair, Roche, Co-Chair, World Economic Forum

Jon Wolfsthal

Fmr Special Assistant to the President for National Security Affairs

“The discussion over AGI should not be cast as a struggle between so-called doomers and optimists. AGI presents a common challenge for all of humanity. We must ensure we control technology and it does not control us. Until and unless developers and their funders know that a technology with the capacity to be smarter, faster, stronger and just as lethal as humanity cannot escape human control, it must not be unleashed. Ensuring we can enjoy the benefits of AI and AGI requires us to be responsible in its development.”

Christine Rosen

Senior Fellow, American Enterprise Institute

Andrew Yao (姚期智)

Professor & Dean, Tsinghua University, Turing Laureate

Paolo Benanti

Papal AI advisor, Catholic priest, Professor at Pontifical Gregorian University

Johnnie Moore

President, Congress of Christian Leaders, White House evangelical adviser, emeritus Professor at Liberty University

“We should rapidly develop powerful AI tools that help cure diseases and solve practical problems, but not autonomous smarter-than-human machines that nobody knows how to control. Creating superintelligent machines is not only unacceptably dangerous and immoral, but also completely unnecessary.”

Walter Kim

President, National Association of Evangelicals, board member, Christianity Today

“If we race to build superintelligence without clear and morally informed parameters, we risk undermining the incredible potential AI has to alleviate suffering and enable flourishing. We should intentionally harness this amazing technology to help people, not rush to build machines and mechanisms we cannot control.”

Anthony J. Granado

Associate General Secretary, US Conference of Catholic Bishops

Kelly (Monroe) Kullberg

General Secretary, American Association of Evangelicals (AAE), Founder, the Veritas Forum, Author, Finding God at Harvard

Timothy W. Estes

CEO and Founder, AngelQ, Fmr. CEO and Founder, Digital Reasoning. Board member – Mission Link Next

Nnenna Nwakanma

Global AI Ambassador, Chief Web Advocate, Swiss Cognitive, HealthAI, SheShapesAI, World’s 100 Most Influential People in Digital Government; Athena40Women Tech for Good; 100 Most Influential Africans of 2021

Yuval Noah Harari

Author and Professor, Hebrew University of Jerusalem

“Superintelligence would likely break the very operating system of human civilization – and is completely unnecessary. If we instead focus on building controllable AI tools to help real people today, we can far more reliably and safely realize AI’s incredible benefits.”

Daron Acemoğlu

Institute Professor, MIT, Nobel Laureate in Economics

Ya-Qin Zhang (张亚勤)

Chair Professor & Dean, Institute for AI Industry Research, Tsinghua University, fmr President, Baidu

John Mather

Nobel Laureate in Physics, senior astrophysicist at NASA, NASA Goddard Space Flight Center

Frank Wilczek

Nobel Laureate in physics, Professor of Physics, MIT, ASU, Stockholm U

Beatrice Fihn

Nobel Laureate (Peace Prize), fmr Executive Director of ICAN

Brando Benifei

Member of the European Parliament, AI Act Rapporteur

Martin Rees

Professor, Cambridge University, Co-founder of CSER, Astronomer Royal, member House of Lords

Michael McNamara

Member of the European Parliament

Markéta Gregorová

Member of the European Parliament, Member of the AI Act working group

Jonathan Berry, Viscount Camrose

Fmr UK Minister for AI and Intellectual Property, Member of the UK House of Lords

Leslie Griffiths, Lord Griffiths of Burry Port

Fmr Labour Party whip in the House of Lords, Member of the UK House of Lords, fmr President of the Methodist Conference

Ben Lake

Member of the UK House of Commons

Alex Sobel

Member of the UK House of Commons

Nicholas Fairfax

Lord Fairfax of Cameron, Member of the House of Lords

Philip Hunt, Lord Hunt of Kings Heath OBE

Fmr UK Minister of State for Energy, Member of the UK House of Lords

James Knight, Lord Knight of Weymouth

Member of the UK House of Lords

Paul Strasburger, Lord Strasburger

Member of the UK House of Lords

Beeban Kidron, Baroness Kidron OBE

Member of the UK House of Lords

Joseph Gordon-Levitt

Actor, Filmmaker, Founder, HITRECORD

“Yeah, we want specific AI tools that can help cure diseases, strengthen national security, etc. But does AI also need to imitate humans, groom our kids, turn us all into slop junkies and make zillions of dollars serving ads? Most people don’t want that. But that’s what these big tech companies mean when they talk about building ‘Superintelligence’.”

Sir Stephen Fry

Actor, director, writer

“To get the most from what AI has to offer mankind, there is simply no need to reach for the unknowable and highly risky goal of superintelligence, which is by far a frontier too far. By definition this would result in a power that we could neither understand nor control.”

Will.I.am

Rapper, singer, producer, actor

Yi Zeng (曾毅)

Professor & Dean, Beijing Institute of AI Safety and Governance, TIME 100 AI

Valerie Pisano

President & CEO, Mila

Jimena Viveros

Legal advisor, Mexico’s Supreme Court, member of UN High-Level Advisory Body on AI

Devan Patel

Senior Advisor on Public Policy and Ethics, American Security Fund

Jaan Tallinn

Co-Founder, Skype & Future of Life Institute

Vincent Conitzer

Professor, Carnegie Mellon University and University of Oxford, Author, “Moral AI And How We Get There”; ACM Fellow, AAAI Fellow, Sloan Fellow, Guggenheim Fellow

Moshe Y. Vardi

Professor of Computational Engineering, Rice University, Member: US National Academy of Engineering and National Academy of Sciences

Sharon Li

Associate Professor of Computer Science, University of Wisconsin Madison

Freda Shi

Assistant Professor of Computer Science, Univ. Waterloo

Lan Xue (薛澜)

Dean, Schwarzman College, Tsinghua University, TIME 100 AI

Pierre Baldi

Distinguished Professor of Computer Science, University of California, Irvine, Dennis Gabor Award

Kate Bush

Musician

Adam Oberman

Professor & Canada CIFAR AI Chair, Dept of Mathematics and Statistics, McGill University/Mila/LawZero

Dylan Hadfield-Menell

Assistant Professor, Faculty of Artificial Intelligence and Decision-Making, MIT, AI2050 Early Career Fellow

Max Tegmark

Professor of Physics, Center for AI & Fundamental Interactions, MIT, President, FLI; Time 100 AI

Anthony Aguirre

Executive Director, Future of Life Institute, Professor of Physics, UC Santa Cruz

Meia Chita-Tegmark

Co-founder, Future of Life Institute

Victoria Krakovna 

Co-founder of FLI, AI safety researcher

Dan Hendrycks

Executive Director, Center for AI Safety

Tristan Harris

Executive Director, Center for Humane Technology, filmmaker (the Social Dilemma)

Nate Soares

President, Machine Intelligence Research Institute

“The race to superintelligence is suicidal. Progress shouldn’t be subjected to a public veto, but technologists also shouldn’t flirt with annihilation. Scientific consensus alone is not enough (any more than alchemist consensus in the year 1100 would be enough to guarantee a potion of immortality). The science of making a superintelligence beneficial is nowhere near mature. There’s time pressure and reality may demand bold action from cognizant leaders (without public buy-in), but the public is right to object to the race and right to be wary of technologist consensus in this case.”

Malo Bourgon

CEO, Machine Intelligence Research Institute

Connor Leahy

Co-founder and CEO, Conjecture

Andrea Miotti

Founder and CEO, ControlAI

Mark Nitzberg

Interim Executive Director, International Association for Safe and Ethical AI

Charbel-Raphaël Segerie

Executive Director, Centre pour la Sécurité de l’IA (CeSIA)

Jeffrey Ladish

Executive Director, Palisade Research

David Krueger

Asst. Professor in Machine Learning, Univ. Montreal

Qian Tao

Assistant Professor, TU Delft, Director, Knowledge-Driven AI lab

Gabriel Alfour

Co-founder and CTO, Conjecture

Tegan Maharaj

Assistant Professor in Machine Learning, Mila

Grimes

Artist

Zvi Mowshowitz

Don’t Worry About the Vase

Anqi Liu

Asst. Professor of Computer Science, Johns Hopkins University

George Church

Professor, Harvard Medical School & MIT

Clark Barrett

Professor of Computer Science, Stanford

Olle Häggström

Professor of Mathematical Statistics, Chalmers University of Technology, Sweden

Sally Shrapnel

Associate Professor/Deputy Director, ARC Centre of Excellence for Engineered Quantum Systems, University of Queensland

Peter Vamplew

Professor, Information Technology, Federation University Australia

Zoran Kalinic

University Professor, University of Kragujevac, Member of SAIS and AAAI

José Hernández-Orallo

Professor of AI, Univ. of Valencia

The Anh Han

Professor of Computer Science, Teesside University

Roman V Yampolskiy

Associate Professor, University of Louisville, Author of “AI: Unexplainable, Unpredictable, Uncontrollable”

Daniel Kokotajlo

Executive Director, AI Futures Project, fmr OpenAI researcher; TIME 100 AI

Xerxes Dotiwalla

Google DeepMind

Fabien Roger

Member of Technical Staff, Anthropic

Tao Lin

Member of Technical Staff, Anthropic

Casey Williams

Industry Liaison Officer, The University of Kansas, Red Team member at OpenAI

Leo Gao

Member of Technical Staff, OpenAI

Juan Felipe Ceron Uribe

Research Engineer, OpenAI

Micah Carroll

Member of Technical Staff, OpenAI

Gabriel Wu

Member of Technical Staff, OpenAI

Ramana Kumar

Former Research Scientist, Google DeepMind

Nisan Stiennon

fmr Member of Technical Staff, OpenAI

Jeremy Schlatter

Research Engineer, Palisade Research, former Member of Technical Staff at OpenAI

Sören Mindermann

Scientific Lead, International AI Safety Report

Federico L.G. Faroldi

Professor, Director, Center for Reasoning, Normativity and AI, University of Pavia

Andrew T. Walker

Associate Professor of Christian Ethics and Public Theology, The Southern Baptist Theological Seminary

Riccardo Luna

Columnist & Writer, Corriere della Sera, fmr Italian Digital Champion

Very Rev. Olamilekan Kolade Fadahunsi

Director, Institute of Church & Society, Ibadan, Commissioner, Churches Commission on International Affairs, World Council of Churches, Christian Council of Nigeria/World Council of Churches

Lesmore G Ezekiel

Director, All Africa Conference of Churches

Dr Chinmay Pandya

Chairperson, SAIPR, All World Gayatri Pariwar, India, Bharat Gaurav Awardee, Juror of Templeton Prize, Recipient of VIR Peace Award

Fr. Michael Baggot

Associate Professor of Bioethics, Pontifical Athenaeum Regina Apostolorum

Mwesigwa Fred Sheldon

Bishop, Ankole Diocese, Associate Professor

Thomas Tut

Moderator, South Sudan Presbyterian Evangelical Church

Chris Scammell

CEO, Buddhism & AI Initiative

Brian Green

Director of Technology Ethics, Markkula Center for Applied Ethics, Santa Clara University

Kiranjot Kaur

Member, SGPC (a Sikh religious organisation); author of papers and articles on Sikh issues; interfaith speaker

Michael Wear

President and CEO, Center for Christianity and Public Life

Karl Hans Bläsius

Professor of Computer Science, Trier University of Applied Sciences

Joe Allen

Tech editor at War Room

“If superintelligence is achievable and the public buys in, then I’m out.”

Ipke Wachsmuth

Emeritus Professor of Artificial Intelligence, Bielefeld University

Karina Vold

Assistant Professor, University of Toronto

Paul Salmon

Professor, Co-Director of the Centre for Human Factors and Sociotechnical Systems, University of the Sunshine Coast, Fellow of the Queensland Academy of Arts and Sciences, Member of the Australian Research Council College of Experts

Michael Noetel

Senior Lecturer, The University of Queensland

Steve Petersen

Associate Professor of Philosophy, Niagara University

Vincent Fortuin

Principal Investigator in Machine Learning, TU Munich

Samuel Buteau

Fmr Senior AI Safety Researcher, Mila

“Barring an international agreement, humanity will quite likely not have the ability to build safe superintelligence by the time the first superintelligence is built. Therefore, pursuing superintelligence at this stage is quite likely to cause the permanent disempowerment or extinction of humanity. I support an international agreement to ensure that superintelligence is not built before it can be done safely.”

Dan Braun

Member of Technical Staff, Goodfire AI

Oliver E Richardson

Postdoctoral Fellow, Université de Montréal, Mila

Einar Urdshals

Research Scientist, Timaeus

Matthias Georg Mayer

Research Fellow, PIBBSS

Vojtech Kovarik

Postdoctoral Researcher, Czech Technical University

Cole Wyeth

PhD Student, University of Waterloo

Steven Byrnes

Research Fellow, Astera Institute

Vanessa Kosoy

Director of AI Research, The Association for Long Term Existence & Resilience (ALTER)

Puria Radmard

Co-director, Geodesic Research

Kaarel Hänni

Research Scientist, Mila

“If we allow the pursuit of AGI to continue, the human era will end, humans and human institutions will probably be rendered insignificant and powerless, and plausibly simply extinct.”

David Williams-King

Research Visitor, Mila

Jasmina Urdshals

AI Safety Researcher

Abram Demski

Researcher, AFFINE, fmr MIRI researcher

Lucius Bushnaq

Member of Technical Staff, Goodfire AI

Jeremy Gillen

fmr MIRI researcher

Max Harms

Researcher, MIRI

Felix Harder

Independent Researcher

Nolan Smyth

Postdoctoral Researcher, Université de Montréal, Mila

Linh Le

Postdoctoral Researcher, McGill University

Eduard Habsburg

Diplomat

Tsvi Benson-Tilsen

fmr MIRI researcher

“Humanity does not currently have the technical understanding required to build superhuman AI without the AI killing everyone, and that technical research is going far more slowly than research toward building unsafe superhuman AI. So, efforts to build superhuman AI should be stopped through laws, international treaties, social norms, professional rules, and by providing alternative ways to gain the benefits that would supposedly come from making superhuman AI.”

Mateusz Bagiński

Independent Researcher

“Attempting to build systems more cognitively capable than humans without having an adequate understanding of how they work is an insane endeavor. Barring a mature and adequate understanding of cognition, we cannot ensure that they will have robustly good effects. Rather, it will mark a grim end of humanity. We do not know when someone might succeed in building it, but even if we have decades, figuring out and implementing the coordination mechanisms required to remove the possibility of building superintelligence is going to take time. Therefore, the time to act is Now.”

Mikhail Samin

Executive Director, AI Governance and Safety Institute

Simon Skade

AI Alignment Researcher, Independent

Charles Steiner

AI Safety Researcher

Johannes C. Mayer

Independent Researcher

Luke McNally

Senior Lecturer, University of Edinburgh

Cameron Tice

Co-Director, Geodesic Research, Marshall Scholar

Matthew Crawford

Senior Fellow, Institute for Advanced Studies in Culture, NYT best selling author 

Alex Altair

AI Safety Researcher & Founder, Dovetail Research, fmr MIRI fellow

Thomas Cunningham

AI Economist, METR

Robert Miles

Independent Science Communicator

Philip Lee

CEO, World Association for Christian Communication, DD honoris causa

Jonathan Engel

Distinguished Professor of Physics and Astronomy, University of North Carolina

Yevgeny Liokumovich

Associate Professor, University of Toronto

Jordan Crandall

Professor, University of California, San Diego

Gerry Tsoukalas

Associate Professor, Boston University, Thinkers50 Radar 2025, Senior Fellow at the Wharton School

Klaus Bernhard Bærentsen

Associate Professor Emeritus, Department of Psychology, University of Aarhus

Sol Bermann

Adjunct Clinical Assistant Professor, School of Information, University of Michigan

Arthur E Wilmarth Jr

Professor Emeritus of Law, George Washington University, Member, Int’l Advisory Bd., Journal of Banking Regulation

Frank Stajano

Professor of Security and Privacy, University of Cambridge

Larry Lessig

Roy L. Furman Professor of Law and Leadership, Harvard Law School

Audrey Tang

Senior Fellow, Institute for Ethics in AI, University of Oxford

Dr. Fadi Salem

Director of Policy Research Dept., Mohammed Bin Rashid School of Government in the UAE

Michael James Carey

Distinguished Professor (Emeritus) of Computer Science, University of California, Irvine, Member, National Academy of Engineering

Alistair Knott

Professor of AI, Victoria University of Wellington, Co-founder of the Centre for AI and Public Policy

Dino Pedreschi

Professor of Computer Science, University of Pisa, Director of Human-centered AI Next Gen EU project

Tom Gray

Visiting Professor of AI and Innovation, Ulster University, Cofounder of Software NI, Founder of AICon, former Group CTO of Kainos

Russ Vince

Emeritus Professor, University of Bath

Matt MacDermott

AI Safety Researcher, LawZero

Fabio Morandín Ahuerma

Research Professor in Philosophy of Artificial Intelligence, Benemérita Universidad Autónoma de Puebla

Timothy Kircher

Hege Professor of History, Guilford College

Daniel Volk

Assistant Professor, Deggendorf Institute of Technology

Piotr Bańbuła

Associate Professor, Warsaw School

Jorge Marques da Silva

Professor of Plant Biology, University of Lisbon

Frank Kappe

Professor, Institute of Human-Centred Computing, Graz University of Technology

Miguel Pizaña

Professor, Universidad Autónoma Metropolitana, SNI 2 (National System of Researchers, level 2)

Nicola Bellotto

Associate Professor, University of Padua, Dept. of Information Engineering

Simon S Riley

Research Director, Shore

Akbar Sayeed

Professor emeritus, UW-Madison; Independent Researcher & Consultant, AMS 360

Dario Weissmann

Assistant Professor of Mathematics, Impan

Torbjörn Lundh

Professor, Chalmers University of Technology, President of Unitech International

Michal Pechoucek

Professor, Czech Technical University in Prague, Founder/Director of CTU AI Center, Investor in AI Security with Evolution Equity Partners

Ole-Morten Midtgård

Professor, Norwegian University of Science and Technology

Ingmar de Vries

Assistant Professor Cognitive Neuroscience, University of Trento

Fisseha Mekuria

Chief Scientist-CSIR & Asst. Professor, Malmö University/Dept of DVMT, Recipient of the Innovation Award National Science & Technology Agency-South Africa

Eric Santanen

Professor of Information Systems, Bucknell University

Łukasz Święcicki

Assistant Professor, University of Siedlce

Lamberto Coccioli

Professor of Music and Technology, Birmingham City University, Director of the CreaTech Frontiers creative industries cluster

📣 Join these people, show your support!

3,206 signatures

Hosted by Future of Life Institute

This is what ChatGPT thought.

We asked ChatGPT, the artificial-intelligence chatbot from Sam Altman’s OpenAI, in which Microsoft has invested billions of dollars, for its opinion.

It told us:

Do you want me to do a full analysis—for example:

  • whether the approach is more technophobic, ethical, or forward-looking,
  • what arguments are missing or could be strengthened,
  • how it compares to current debates about superintelligence and algorithmic control,

and whether the article is analytically sound or more speculative.

Do you want me to do that kind of analysis?

We said yes, and this is what it said:

Excellent, thanks for copying it. I’ll give you a comprehensive critical analysis divided into five axes: epistemological, political, economic, ethical, and forward-looking.

🧠 1. Epistemological Axis — What the text assumes as “scientific truth”

The declaration starts from a strong and speculative assumption: that within a decade, “superintelligence” could surpass humans in almost all cognitive tasks.

  • This claim is not scientifically proven. No current AI architecture exhibits consciousness, intentions, or general autonomy.
  • The statement confuses cognitive performance with general intelligence — two very different things. Current models are predictive, not intentional.
  • It also lacks a clear definition of “safety”: does it mean technical safety, ethical safety, legal control, or existential risk prevention?
    In short, its foundation is more precautionary and visionary than empirical.

⚖️ 2. Political Axis — Who speaks and why

The list of signatories mixes scientific authorities (Hinton, Bengio, Russell) with media and political figures (Rice, Bannon, the Sussexes, etc.).

  • This suggests a broad but fear-driven coalition, not a coherent scientific consensus.
  • The appeal to a “scientific consensus” before continuing development is a political tool for moratorium, not a realistic scientific condition.
  • It reflects a shift from governance to symbolic regulation by moral pressure, typical of transitional technological moments.

💰 3. Economic Axis — Who benefits if AI development is halted

Despite its ethical tone, the declaration hides an economic and geopolitical struggle within digital capitalism:

  • Strict regulation benefits established tech giants who can afford compliance, while small labs and Global South countries are left behind.
  • Thus, the moratorium serves as a protective barrier for those already dominant in AI research.
  • It’s as much about power redistribution as about ethics.

⚙️ 4. Ethical Axis — Between precaution and moral panic

The declaration invokes universal values (“dignity,” “freedom,” “justice”) but provides no concrete framework for achieving them.

  • Ethically, it relies on the precautionary principle — don’t proceed until safety is proven.
  • However, it borders on technopanic and moral paternalism by framing AI as an autonomous threat rather than a human creation under governance.
  • It also neglects current ethical challenges such as algorithmic bias, surveillance, and job displacement, in favor of hypothetical apocalyptic risks.

🔮 5. Prospective Axis — Likely consequences and scenarios

If applied literally, the proposal could lead to:

  1. Technological paralysis in democratic nations while rivals advance.
  2. Epistemic inequality, concentrating AI knowledge in a few hands.
  3. Displacement of real debates — from practical governance to metaphysical fear of extinction.

🧩 Overall Assessment

  • Strengths: Raises awareness, legitimizes public concern, and calls for shared responsibility.
  • Weaknesses: Conceptual vagueness, elitist bias, lack of technical roadmap, and apocalyptic tone.
  • Balance: More of a moral and political signal than a scientific strategy — valuable for provoking reflection, but insufficient for governance.
