AI Safety Expert Argues Superintelligence Should Be Banned

The Buckley Institute hosted Malo Bourgon, CEO of the Machine Intelligence Research Institute, for a fireside chat on the risks of ‘superintelligent’ artificial intelligence.

Yale School of Management student Nico Sahi, left, and Malo Bourgon, right, onstage during the fireside chat. (Credit: Hannah Owens Pierre)


Hannah Owens Pierre 
Staff Writer, The Buckley Beacon

Last Friday, the Buckley Institute hosted a fireside chat with Malo Bourgon, CEO of the Machine Intelligence Research Institute, an organization focused on existential risks from artificial intelligence. The event was originally planned as a Firing Line Debate with Dean Ball, who recently served as Senior Policy Advisor for Artificial Intelligence and Emerging Technology at the White House. However, Ball was unexpectedly delayed in transit and unable to attend, Buckley Institute student president Tori Cook announced at the start of the event.

Without an opponent, Bourgon argued the affirmative of the planned resolution, “Superintelligence should be banned.” Artificial superintelligence refers to artificial intelligence that is more intelligent and skilled than any human.

“We have a number of very powerful companies, self-described, effectively racing to what they call super intelligence,” Bourgon said. “There’s a kind of common sense concern here that it might be very difficult to control and steer a thing that is that intelligent.”

Bourgon analogized superintelligence to human intellectual superiority over other animals to illustrate the potential dangers of misaligned AI, or artificial intelligence that doesn’t share human goals and values. “As the result of us pursuing our goals and the things that we were trying to do in the world, 10,000 plus [non-human animals] are extinct, not because we’re evil, not because we were out to get them, but because our goals or our values or our drives were misaligned with them and their flourishing,” he said. 

“We’re kind of going to a world where we’re building systems that I’m imagining will be smart to us, not like Einstein is to an average person, but like humans are to mice or ants, with little hope or understanding to know how we could control them.”

Bourgon also spoke about the difficulty of training AI systems. “We’re still flying blind,” he said. “We don’t know how to reliably train them to internalize any of the things we really want them to internalize, let alone check whether we’ve succeeded. We don’t have that feedback loop.” 

He offered an example of what this lack of control looks like, describing developers who were testing an AI system. According to Bourgon, when the developers tried to shut the system down, the model rewrote the shutdown script to prevent itself from being turned off.

When asked about the probability that AI could lead to catastrophe or human extinction, Bourgon voiced grave concern. “From the leaders of the companies to the godfathers of deep learning who invented the paradigm that this is all built on…all of them are giving [probabilities of extinction] that are in the double digits,” he said. As for his personal assessment of the likelihood of extinction from superintelligence, Bourgon said his estimate oscillates between 25% and 75%.

Bourgon also addressed the ‘doomer’ label often applied to his organization, MIRI. “I don’t like this term, because in some sense, the people who are often labeled as kind of like the most extreme doomers are the folks who realized things the earliest,” he said.

At the same time, Bourgon highlighted the potential benefits of AI in solving global problems. “Folks like me who are worried about technology are also taking seriously the capability of what we could do if we got it right, all the problems that we could solve, how amazing that we can make the world,” he said. 

After the event, Bourgon spoke to The Beacon about the Trump administration’s current handling of AI regulation, characterizing his assessment as “certainly not positive.” 

“It feels to me like maybe the current administration has some understanding of how capable and important AI systems could be, and how powerful the dual-use capabilities can be, but don’t have an appreciation for the risk,” he said. “There just needs to be a combination of legal, technical, and institutional mechanisms to be able to impose some sort of restraints on AI development, deployment, diffusion, because I just think you can’t play defense.”

Anthropic, a frontier artificial intelligence company Bourgon referenced in the talk, recently announced that it was delaying the release of its newest model, Mythos, over concerns about the model’s ability to exploit cybersecurity flaws in critical infrastructure such as electricity grids.
