More than 33,000 people—including a hall of fame of AI experts—signed a March 2023 open letter calling on the tech industry to pause development of AI models more powerful than OpenAI’s GPT-4 for six months rather than continue the rush toward Artificial General Intelligence, or AGI. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” stated the letter, which was spearheaded by an organization called the Future of Life Institute.
Spoiler: The industry didn’t heed the letter’s call. But it did generate tremendous publicity for the case that AGI could spiral out of human control unless safeguards were in place before they were actually needed. And it was only one of many initiatives from the decade-old institute designed to cultivate a conversation around AI’s risks and the best ways to steer the technology in a responsible direction.
At the Web Summit conference in Lisbon, I caught up with Future of Life cofounder and president Max Tegmark, an MIT professor whose day job involves researching “AI for physics and physics for AI.” We spoke about topics such as the open letter’s impact, why he thinks AI regulation is essential (and not that difficult to figure out), and the possibility that AGI will become real during Donald Trump’s upcoming presidential term—and how Elon Musk, who is an FLI advisor, might help the administration deal with that possibility in a constructive manner.
This interview has been edited for clarity and length.
Is the work you’re doing accomplishing what you hope it will, even though the AI industry didn’t pause for six months?
My goal with spearheading that pause letter was not that I thought that there was going to be a pause. Of course, I’m not that naive. The goal was to mainstream the conversation and make it socially safe for people to voice their concerns. I actually feel that’s been a massive success.
It certainly got a lot of attention.
A lot of stakeholders who didn’t say much about this have now started speaking about it. I felt there was just a lot of pent-up anxiety people had all across society. Many felt afraid of looking stupid by talking about this, or afraid of being branded as some kind of Luddite scaremongers. And then when they saw all these leaders in the field also expressing concerns, it became socially validated for anyone else who wanted to talk about it. And it brought into the open the possibility of that other extinction letter, which, in turn, I think, very directly led to the Senate hearings, the international AI summits, the AI safety institutes starting to exist, and stuff like that, which are all very welcome.
I’ve been feeling kind of like Lone Wolf McQuaid for the past 10 years, where there was really no uptake among policymakers. And that’s completely changed.
I was impressed by the fact that people within large organizations were willing to be attached to your letter even if their bosses were not.
But the extinction letter in May of 2023 was actually signed by all the bosses. I viewed it a little bit as a cry for help from some of those leaders, because it’s impossible for any company to unilaterally pause. They’re just going to get crushed by the competition. The only good outcome is that there are safety standards put in place that level the playing field for everyone.
Like if there were no FDA, no pharma company could just unilaterally pause releasing drugs. But if there’s an FDA and nobody can release the new drugs until they’ve gone through FDA approval, there’s an automatic pause on all unsafe drugs. And it really changes the incentive structure in a big way within the industry, because now they’ll start investing a lot of money in clinical trials and research and they’ll get much safer products as a result.
We have that model in basically every industry in America. We have it not just for drugs; we have it for cars, we have it for airplanes, we even have it for sandwich shops. You have to go get the municipal health inspector to check that you don’t have too many rats in the kitchen. And AI is completely anomalous as an industry. There are absolutely no meaningful safety standards. If Sam Altman’s own prediction comes true and he gets AGI soon, he’s legally allowed to just release it to see what happens.
How clear an understanding do you have of what the regulations should be, especially given that AI is moving faster than sandwich shop safety or even drugs?
I think that’s actually pretty easy. The harder thing is just having the institutional framework to enforce whatever standards there are. You bring together all the key stakeholders and you start with a low bar, and then gradually you can raise it a bit.
Like with cars, for example, we didn’t start by requiring airbags or anti-lock brakes. We started with seat belts and then some people started putting up some traffic lights and some speed limits. It was really basic stuff and then kind of went from there. But already, when the seatbelt law came in, it had a transformative impact on the industry because deaths went down so dramatically that they started selling way more cars.
And I think with AI, it’s kind of the same. There’s some very basic stuff that’s pretty uncontroversial that you would have in the standards: your products cannot teach terrorists how to make bioweapons. Who’s going to be against that? Or requiring you to demonstrate that this is not some sort of recursively self-improving system that you can lose control over.
But even if you just start with that, it would automatically pause AGI from being released. And we would see then, I think, a re-shifting of focus to all sorts of wonderful tools: AI curing diseases, easing suffering, you name it, where it clearly meets those standards. I’m very optimistic about so much of what’s being discussed here at Web Summit, how much benefit it can have. And the angst is coming mainly just from the loss-of-control stuff, because the timelines have become so short. So if you can just make sure that AGI gets the equivalent of its clinical trial, there’ll be a refocusing on all the good stuff.
To the best of your ability, can you guess whether the new administration is going to have any effect on this?
I think it depends on the extent to which Donald Trump will listen to Elon Musk. On one hand, you have a lot of folks who are very anti-regulation trying to persuade Trump to repeal even Biden’s executive order, even though that was very weak sauce. And then on the other hand, you have Elon, who’s been pro AI regulation for over a decade and came out again for the California regulation, SB 1047. This is all going to really come down to chemistry and their relative influence.
In my opinion, this issue is the most important issue of all for the Trump administration, because I think AGI is likely to actually be built during the Trump administration. So during this administration, this is all going to get decided: whether we drive off that cliff or whether AI turns out to be the best thing that ever happened.
If we lose control of AGI, frankly, none of the other stuff matters. The people who signed the extinction letter weren’t joking when they talked about human extinction. They were very serious about it. And I have opinions about all the other politics, and you of course do also, but we can leave that all out of this. None of the other political issues where Trump and Kamala Harris differed are relevant if we’re extinct, or even if we’re not extinct but we’ve lost control to some new self-improving robot species.
Of course, everyone’s trying to persuade Trump to go this direction or that direction or whatever. But if Trump ends up pushing for AI safety standards, I think Elon’s influence would probably be a key reason.
‘I call it digital neuroscience’
What else is cooking in terms of recent developments that impact your work?
I still have two hats. My day job is just working as a nerd at MIT, where I’ve been doing AI research with my group for many years. But we’re focusing very much on technical stuff related to AI control and trust. The software on your laptop is mostly written by [humans], so we know how it works. With large language models, we’ve gotten accustomed to the fact that we have no clue how they work. And of course, no one at OpenAI knows how they work either.
But actually there is some really encouraging news: we’re beginning to understand a lot more about how they work. I call it digital neuroscience, just like you can take your brain that’s doing all sorts of smart stuff and try to figure out how it works. We still don’t understand how your brain works, but there are some aspects of it we almost completely understand now, like your visual system.
With digital neuroscience, the progress has been vastly faster. I organized the largest conference on this to date about a year and a half ago at MIT. And since then, the field has really exploded. The reason it’s going so much faster than normal neuroscience is because in normal neuroscience, you have a hundred billion neurons and you’re hard-pressed to measure more than a thousand at a time. And there’s noise and you have to go to the IRB for ethics approval before you stick electrodes in people’s brains.
Whereas when it’s a digital intelligence, you can measure every single neuron all the time, and you can measure the synapses, and there’s no ethics board. It’s the scientist’s dream for understanding the system, and progress is so fast. So now we’re mapping out how all the different concepts are stored in the AI, and starting to figure out how it does certain interesting tasks.
It can be useful for assessing how much you should trust the AI. It can also be used to assess what the AI actually knows and thinks. We wrote a paper, for example, where we were able to build a lie detector: We could see what the AI believed was true and false, and then we could compare that with what it claimed was true and false to see if it was lying or telling the truth. So these are baby steps, but there’s a growing community here making progress really, really fast. We should not be so pessimistic and think that we’re always going to have to remain clueless about how these things work.
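To make the lie-detector idea concrete: work in this vein typically trains a simple probe on a model’s internal activations to separate statements the model represents as true from ones it represents as false, then compares the probe’s verdict with what the model says out loud. The sketch below is purely illustrative, assuming synthetic activations in place of a real forward pass and placeholder model outputs; it is not the code from Tegmark’s paper.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for hidden-layer activations; in real work these would
# come from a forward pass through the LLM on a labeled set of statements.
n_statements, hidden_dim = 200, 64
activations = rng.normal(size=(n_statements, hidden_dim))

# Ground-truth labels: 1 = the statement is factually true, 0 = false.
is_true = rng.integers(0, 2, size=n_statements)

# Plant a weak "truth direction" so this toy example has a signal to find.
truth_direction = rng.normal(size=hidden_dim)
activations += np.outer(is_true - 0.5, truth_direction)

# The probe itself: ordinary logistic regression over the activations.
probe = LogisticRegression(max_iter=1000).fit(activations[:150], is_true[:150])

# What the probe says the model internally "believes" on held-out statements.
internal_belief = probe.predict(activations[150:])

# Placeholder for what the model actually claimed about those statements.
claimed = rng.integers(0, 2, size=50)

# A mismatch between internal belief and spoken claim is what gets flagged.
possible_lies = internal_belief != claimed
print(f"Probe accuracy on held-out statements: {probe.score(activations[150:], is_true[150:]):.2f}")
print(f"Flagged {possible_lies.sum()} of 50 statements as possible lies")

The point of the toy is the comparison at the end: when the internal “belief” recovered by the probe disagrees with the model’s spoken claim, that mismatch is what a lie detector of this kind would surface.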
And if we do really understand them, does that make them a lot easier to control than they are currently?
It depends on what you want to control, but if you have a safety standard that says you have to demonstrate that it can never teach a terrorist how to build a bioweapon, well, an easy way to do that is to just determine whether it knows anything about biology. And if it does, you delete that knowledge until you have a model that’s perfectly conversant about Shakespeare and airplanes and everything else but just doesn’t know anything about biology, so there’s no way it’s going to help make a bioweapon.
There are scenarios where knowing about biology might be a great thing.
Of course, but then maybe those will take a little longer to get FDA licensed. But you can see where I’m going with this, and maybe you can determine that the biology is fine but there are certain aspects you don’t want it to know. I mean, this is how you do it in an organization. If you’re the head of the CIA and there are certain kinds of knowledge that you really want to protect, you don’t put it in the head of every employee. The best way you can guarantee that someone isn’t going to leak information is if they don’t know it. In the same way, there’s no reason why every LLM has to know everything. It’s much safer if you have machines that only know the stuff that you’re okay with everybody knowing and using generically.
And then, if you’re selling something to Dana-Farber for cancer research or whatever, maybe they can have one that knows a lot more biology. But you probably don’t need it for your work, right? I certainly don’t need it for my work.
‘We’ve got to stop coddling the AI industry’
Do you plan to do more open letters? Are there other things that you want to provoke conversations about?
I think what we really need urgently is national safety standards for AI. We’ve got to stop coddling the AI industry and just treat it like all other industries. It would be absurd if someone said we should close the FDA and let biotech companies operate without safety standards. AI is kind of the odd one out: for some reason it can do whatever it wants, even though it’s the new kid on the block. As soon as we get those standards, the rest will take care of itself, because then people will start innovating to meet them.
We’ve seen the EU in some cases have a big impact even on American tech companies.
I think the EU AI Act is pretty weak, but it’s definitely a good step in the right direction. There’s a lot of technical work being done now on how to implement all these things in practice, which can then easily be copied by other jurisdictions like the U.S. if they want to.
I also thought it was kind of cool how it called the bluff of some of the companies that said they were going to leave the EU if this passed. They’re still here. So when they say now, “Oh, we’re going to have to leave the U.S. if the U.S. puts some standards in place,” American legislators will know that this is bluster. I can’t blame the companies for doing that, because the tobacco companies did the same. Companies always instinctively push back against regulation. But every successful regulation or safety standard in the past that I can think of was met with claims that the sky was falling.
Although there are all these cases of people in the AI industry saying that they want legislation to play a role.
I don’t want to judge anyone in particular, but I do find it kind of comical how some people said, “Oh, please regulate us,” and then when someone proposes regulation like the EU AI Act or SB 1047, they’re like, “Oh, but not like that.” So, yeah, I think these decisions need to be made democratically. Listen to the companies, but they shouldn’t have a seat at the table when it’s decided.