A pause on superintelligent AI? A debate in the UK House of Lords
People who casually dismiss these concerns will soon be on the fringe
In January 2026, the UK House of Lords debated whether the UK should push for an international moratorium on superintelligent AI.
I cannot believe this happened. And it is not surface-level lip service. They have done their background reading, and they are not beating around the bush about the catastrophic risks from superintelligent AI. If you have previously dismissed these concerns as science-fiction fear-mongering or marketing hype, this should at least make you consider engaging with the arguments and the facts of the matter.
I have collected the quotes that have some substance to them. The fact that there are so many relevant quotes indicates this was a high-density discussion. I highly recommend readers at least skim them; to help with that, I have put in bold the quotes that I think are particularly noteworthy.
If you do not know why superintelligent AI is considered a huge risk by many people, then reading these quotes will give you a quick overview of the key beliefs and risks that people worry about, as well as some of the arguments and facts of the matter.
If you are already aware of the risks of ASI, then you will not learn any new arguments, but it is worth knowing what some people in power are aware of and saying.
Lord Hunt
To ask His Majesty’s Government what plans they have to bring forward proposals for an international moratorium on the development of superintelligent AI.
we cannot ignore the huge risks that superintelligent AI—or ASI, as I will call it—may bring. I am using this debate to urge the Government to consider building safeguards into ASI development to ensure that it proceeds only in a safe and controllable manner, and to seek international agreement on it.
Among the risks, [Dario Amodei] pointed out, is the potential for individuals to develop biological weapons capable of killing millions or, in the worst case, even destroying all life on earth.
The UK AISI said that “AI systems also have the potential to pose novel risks that emerge from models themselves behaving in unintended or unforeseen ways. In a worst-case scenario, this unintended behaviour could lead to catastrophic, irreversible loss of control over advanced AI systems”.
[AI companies] have made racing to develop superintelligent AI their explicit goal, with each company feeling compelled to move faster precisely because their competitors are doing the same. So I call on the Government to think through the need not just for a moratorium on development but for some international agreement. These are not exactly fertile times to propose international agreements, but the fact is that countries are still agreeing treaties and the case is so strong that we must start discussing this with our partners.
Baroness Neville-Jones
It is not often that we face choices so stark and so difficult, and where on the one hand there is an immense benefit to be gained and on the other a catastrophe for humanity. That is the situation that we are in, so we have to take this issue very seriously.
I do not think that [a moratorium] will be possible in the short term. Frankly, while President Trump is in the White House, the US is not going to regulate the development of AI, nor will it help others do that; in fact, it is much more likely that it will stand in the way.
Lord Strasburger
ASI could be controlled by a small group of humans who could use it to concentrate economic and political power.
Another grave risk is totalitarian surveillance and control, allowing states, corporations, or even ASI itself, to lock in a highly repressive global regime for generations. ASI might design advanced weapons, accelerate a military arms race or trigger accidental or intentional large-scale conflict, including nuclear war. Superhuman hacking skills could allow it to seize computer networks, financial systems, power grids and communications channels, making it extremely hard for humans to ever regain control.
Advanced ASI tools could make it easier to design lethal pathogens, lowering the skill barrier for bioterrorism or enabling a misaligned ASI to use biological threats as leverage or as an attack vector. By misaligned, I mean systems whose goals have been changed so that they no longer align with the interests of the human race. Many AI experts consider such scenarios possible, not mere science fiction. A misaligned ASI might pursue its goals in ways that sideline or even eliminate humans if it decided that we were an obstacle.
One route is a so-called intelligence explosion, where an advanced system recursively improves its own algorithms and designs better successors, increasing its capabilities so rapidly that humans cannot intervene in time. Another is the emergence of power-seeking behaviour, where an ASI learns that gaining resources, influence and protection from shutdown helps it to achieve its long-term goals and does just that.
A 2022 survey of AI researchers found that a majority assigned at least a 10% chance to the risk that an ASI could cause an outcome as bad as human extinction. Reviews of expert reviews suggest a 5% to 20% probability of an existential catastrophe. These are not zero: they are not even near zero. They are very far from trivial. Even a 1% risk would be unthinkable in aviation or the nuclear industry.
A moratorium and binding international regulation of ASI is, frankly, our only hope, however hard it will be to agree. It will be even harder to enforce, but we have to do it; there is no choice. In the words of the godfather of AI, Geoff Hinton, who has now dedicated himself to warning the world about the dangers posed by his life’s work, “It’s a good time to be 76”. Let us hope that his warning and those of many others are heeded, and that catastrophe is averted.
Lord Tarassenko
My own view, after talking with colleagues in the AI Security Institute and the Alan Turing Institute, is that a moratorium would be unenforceable.
Instead, I support the proposal made this week by the noble Baroness, Lady Harding, to set up a commission to investigate the ethical aspects of general ASI. The commission could be facilitated by the Alan Turing Institute and would consult a range of experts.
In the meantime, we should consider the transition from level 1 to level 2, which is much closer. General AGI carries real risks. The Minister highlighted on Monday the regulation of AI for specific fields—for example, through the MHRA for healthcare. That is an approach I welcome for narrow AI or even narrow AGI. But what we need now is for the Government to initiate a consultation process for the regulation of general AGI, which is likely to be attained by the next generation of frontier models.
Safety testing of models by the AI Security Institute at present relies on voluntary agreements with AI companies. The consultation should therefore also consider the pros and cons of putting AISI, the AI Security Institute, on a statutory footing and legally compelling AI companies to open up their models for safety testing. I very much hope that the Minister will be able to tell us when DSIT is likely to announce a consultation on regulating general AGI.
The Lord Bishop of Hereford
Early experimental examples of super AI have prioritised their own survival, even to the extent of threats of blackmail to their programmers when it was proposed to switch them off. Your Lordships demonstrate in this House a combination of the intelligence, wisdom and love, and deliberating in community that are the heart of our humanity and mutual relationships. Until such time as these virtues can be woven into machines, with the protections to shut them down safely, an international moratorium is the only safe way forward, and I would urge His Majesty’s Government to pursue it.
Lord Stevenson of Balmacara
[I do not know what Stevenson is alluding to when he says the real purpose is being concealed.]
I congratulate my noble friend Lord Hunt on securing this important debate and concealing the real purpose of it in a rather confusing title. I also want to declare that this speech was made entirely by myself and my brain, and I have not consulted any other agency, alive or automatic.
I hope that, when the Minister responds, she will confirm that the Government have no plan to suppress the development of ASI. Inquiry and discovery are deeply ingrained in the human psyche and the AI revolution we are living through should certainly not be suppressed. As we know, however, AI is already disrupting traditional media ecosystems and current regulatory arrangements are struggling. How are we going to regulate AI? That is the key question.
I hope she can say a little more about how the Government intend to regulate in this area, building on the AI Security Institute and supporting the pro-growth agenda.
Baroness Foster of Aghadrumsee
It is the second debate on this matter that has taken place in your Lordships’ House within a month. I think that says a lot about the concern that is growing around this issue.
I read recently that Anthropic’s AI model was used to conduct a Chinese state-sponsored cyber attack, with 80% to 90% of tasks conducted autonomously by the AI system. As risks from advanced AI do not respect boundaries, this is a global challenge that requires co-ordinated solutions at international level. I am concerned that we are not doing enough to be risk aware, and that the Government are adopting a “wait and see” approach rather than leading on international arrangements. I hope the Minister will be able to set out a plan for international Governments to deal with the risks of superintelligence: that is, systems that would be capable of outsmarting experts, compromising our national security and upending international stability even more than it has been upended already.
Recently, 800 prominent figures and more than 100,000 members of civil society came together to sign a statement calling for a prohibition on superintelligence until there is scientific and public consensus. I hear what noble Lords have said today about the difficulties around that, but even the CEOs of leading AI companies have an appetite for this. The CEO of Google DeepMind, based here in the UK, said last week at Davos that he would support a halt in AI development if every other country and company agreed to do so.
Geoffrey Hinton said last week on “Newsnight” that there was a need for international regulation to stop AI being abused. He, like the noble Lord, Lord Hunt, pointed to the Geneva convention on the use of chemical weapons as a template for international action. Despite the fact that we are living through difficult geopolitical times, it is important that that does not stop us from starting the process of looking at these issues.
I urge the Minister to formally acknowledge extinction risk from superintelligent AI as a national security priority and to lead on international efforts to prohibit superintelligence development.
Lord Patel
In 1637 René Descartes said, “I think, therefore I am”. That is what we fear: that ASI will be able to think by itself, and therefore it will be. We fear that it will develop lethal weapons that we cannot control, let alone understand their development. I agree with that. So do all the tech company CEOs who discussed this at length at the Davos meeting and subsequently on different podcasts. So did Yuval Harari from Cambridge, a political reporter and philosopher who has identified the issues that will confront us if AGI leads to ASI.
AI is the next step to AGI, and, as the noble Lord, Lord Tarassenko, said, AGI is the next step to ASI. We are probably closer to level 2 of AGI, but the timelines are long. We are uncertain when we will get to ASI, particularly recursive ASI. If we get to that point, that will be when we have the greatest danger.
I come from the position of saying that moratoriums will not work. But we can work in co-operation with other nations that have already started regulating, such as South Korea and Australia, as well as work with our AI Security Institute in the United Kingdom, to establish our own boundaries through regulations that will allow innovations to continue.
The conundrum is how to allow technology to develop these benefits while creating regulations that will not allow it to develop in areas that are dangerous to humanity.
The way forward on how we govern technology will be in how we identify its consciousness and how we work with it. Therefore, as we learn more, measured regulation and co-operation with other countries is probably the way forward.
Lord Clement-Jones
Superintelligence—AI surpassing human intelligence across all domains—is the explicit goal of major AI companies. Many experts predict that we could reach this within five to 10 years. In September 2025, Anthropic detected the first large-scale cyber espionage campaign using agentic AI. Yoshua Bengio, one of the godfathers of AI development, warns that these systems show “signs of self-preservation”, choosing their own survival over human safety.
Currently, no method exists to contain or control smarter-than-human AI systems. This is the “control problem” that Professor Stuart Russell describes: how do we maintain power over entities more powerful than us? That is why I joined the Global Call for AI Red Lines, which was launched at the UN General Assembly by over 300 prominent figures, including Nobel laureates and former Heads of State. They call for international red lines to prevent unacceptable AI risks, including prohibiting superintelligence development, until there is broad scientific consensus on how it can be done safely and with strong public buy-in.
ControlAI’s UK campaign, described by the noble Lord, Lord Hunt, is backed by more than 100 cross-party parliamentarians in the UK. Its proposals include banning deliberate superintelligence development, prohibiting dangerous capabilities, requiring safety demonstrations before deployment, and establishing licensing for advanced AI.
The Montreal Protocol on Substances that Deplete the Ozone Layer offers a precedent. In 1987, every country signed it within two years—during the Cold War. When threats are universal, rapid international agreements are possible. Superintelligence presents such a threat. Yet the current situation is discouraging. The US has rejected moratoria. Sixty countries signed the Paris AI Summit declaration in February 2025, but the UK did not. Even Anthropic’s CEO, who has been widely quoted today, admits that we understand only 3% of how current systems work. Today, AI systems are grown through processes their creators cannot interpret.
Once a system surpasses human intelligence across all domains, we cannot simply regulate how it is used. We will have lost the ability to control it at all. You cannot regulate the use of something more intelligent than the regulator just sector by sector.
Our AI Security Institute, as the noble Lord, Lord Tarassenko, pointed out, has advisory powers only. We were promised binding regulation in July 2024, but we have seen neither consultation nor draft legislation. Growth and safety are not mutually exclusive. Without public confidence that systems are under human control, adoption will stall.
It is clear what the Government should do. The question is whether we will act with the seriousness this moment demands or whether competitive pressures will override the fundamental imperative of keeping humanity in control. I look forward to the Minister’s response.
Response from the Minister
The Parliamentary Under-Secretary of State, Department for Business and Trade and Department for Science, Innovation and Technology
[The Minister does not directly mention superintelligence or catastrophic risks... :/ The closest is this paragraph:]
One of the AISI’s priorities is tracking the development of AI capabilities that would contribute to AI’s ability to evade human control, which has been raised many times in the debate today. The institute works with the Home Office, NCSC and other national security organisations to share its evidence for the most serious risks posed by AI.
==
[Instead, the Minister mostly discusses all the ways the Government is investing in AI in the UK to ensure the UK gets the benefits, and pays only lip service to reducing risks.]
I shall close by talking about the importance not only of the UK taking the risks of AI seriously, but of our conviction that it will be a driver of national renewal, and of our ambition to be a global leader in the development and deployment of AI. This is the way that will keep us safest of all. Our resilience and strategic advantage are based on our being competitive in an AI-enabled world. It matters who influences and builds the models, data and AI infrastructure.
That is why we are supporting a full plan, including our sovereign AI unit, which is investing over £500 million to help innovative UK start-ups expand and seed in the AI sector. It is why we are progressing the infrastructure level, including the announcement of five AI growth zones across the UK, accelerating the delivery of data centres. It is why we are expanding National Compute and why we are equipping all people—students and workers—with digital and AI skills. We want to benefit from AI’s transformative power, so we need to adopt it as well as manage its risks. That is why we have also committed to looking at the impact of AI on our workforce through the AI and future of work unit. We are working domestically and collaborating internationally to facilitate responsible innovation, ensuring that the UK stands to benefit from all that AI has to offer.

It is now clear that governments won't lead on this since the incentives simply don't align. We've seen the same movie before with climate change.
What do you propose as a realistic path for ordinary people who take this seriously?