This is an audio transcript of the Tech Tonic podcast episode: ‘Superintelligent AI — The Utopians’
Madhumita Murgia
Let me tell you about Claude. Claude describes themselves as helpful, harmless and honest. They can tell a joke. They can write you an essay, write poems, draw up a business plan. Claude’s really, really useful to have around.
Jack Clark
If I ask Claude to do something, Claude goes away and comes back with some interesting responses.
Madhumita Murgia
And that’s Jack Clark. He’s one of the co-founders of Anthropic, the AI company that created Claude. As you may have guessed, Claude is a chatbot, one of many in the wave of AI systems that have totally changed the way that people think about artificial intelligence in the last year.
Jack Clark
So I think the reason why everyone’s become so obsessed with AI is that for many years, getting language models to do anything useful was kind of like a parlour trick that only a small number of experts could do. But only recently did it break through this barrier, from scientific curiosity to wow, this is incredibly useful and also easy for me to use as someone who has no familiarity with the technology.
Madhumita Murgia
But the thing about AI systems like ChatGPT and Claude is that they sometimes do things that nobody expected.
Jack Clark
Language models for years have not really had a sense of humour. Humour’s obviously quite a startling and shocking thing. And I remember one day at Anthropic, a new model came off the production line and someone said, Claude can tell jokes now. And then we all got quite excited and discovered Claude had now gained this ability to show some form of humour, which was making us all chuckle.
Madhumita Murgia
Now you might not think that your chatbot unexpectedly telling jokes sounds too worrying. But what if your chatbot started developing abilities that you really didn’t want it to have?
Jack Clark
More recently, we tried to look at how well Claude could be used for a misuse case. In our case it was bioweapons and we discovered that Claude was more capable than we’d thought.
Madhumita Murgia
It turned out that as well as having a sense of humour, Claude was also very good at telling you how to build a bioweapon. The company is cagey about exactly what kind of weapon Claude was able to unearth, but Clark told me that Anthropic considered it a national security issue, which raises the question: if even the AI’s creators are surprised by the skills it picks up, if even they are alarmed by the harm that it could do, why are they building it at all?
Jack Clark
I think of it a little like we’re in the 17th century and someone dropped a petrol-powered vehicle in a field. It has petrol in it and the key’s in it so we can drive it, but we don’t really know what makes it go.
[MUSIC PLAYING]
Madhumita Murgia
This is Tech Tonic from the Financial Times. I’m Madhumita Murgia.
John Thornhill
And I’m John Thornhill. Over the last year, rapid developments in artificial intelligence have led to fears about the existential risks it poses. So in this season of Tech Tonic, we’re asking whether we’re really getting closer to reaching superintelligent AI, and if so, how worried we should be.
Madhumita Murgia
In this episode: what do the multibillion dollar companies building human-level AI really want? And what kind of vision of the future are they putting forward?
John Thornhill
So let’s talk about some of the companies that are dominating this field of AI. Who are the major players leading the way?
Madhumita Murgia
So really leading the pack, I think, at the moment is OpenAI, which was founded by Sam Altman, funded originally by Elon Musk, although now its biggest investor is Microsoft. There’s of course also Google, which owns DeepMind. Meta has a team working on it as well. And these are really the dominant companies in the space today. We also now have some of the big tech companies in China working to develop AI really quickly. And then you have a range of start-ups all across the world coming into the fray to challenge these bigger fish.
John Thornhill
Where does Anthropic fit into this picture?
Madhumita Murgia
So Anthropic is one of the start-ups, but they’re incredibly well-funded. And they’re also particularly interesting because the company was founded by three researchers, including Jack Clark, whom we’ve just heard from. They used to work at OpenAI but decided to part ways with the company and formed Anthropic as a breakaway. They haven’t been very explicit about the reasons for the split, but they have intimated that they wanted to build something with safety designed into the heart of AI systems, which they clearly didn’t feel OpenAI was doing in the way that they envisaged.
John Thornhill
And what are OpenAI and DeepMind and Anthropic promising AI will be able to do if everything goes well?
Madhumita Murgia
So the vision is pretty utopian. The idea is that something with general intelligence would be able to solve the intractable problems that we’ve been grappling with in areas like climate change, energy use or medicine, for example, but also, in the nearer term, that it will be able to do a much wider range of general tasks compared to the chatbots that we have today. Here’s Anthropic’s Jack Clark again, whom we heard from at the start of this episode.
Jack Clark
I think the direction of travel is that over time you’ve seen AI systems go from being specialised and built for very specific tasks to being increasingly general. You have a single text interface that can do translation, storytelling, code writing, analysis of scientific documents. And these systems are also beginning to be able to reason about images and audio as well.
Madhumita Murgia
And for this reason, he calls it an everything machine. The idea is that you would eventually have this multipurpose system doing tasks end to end. You could even have an AI running a business.
Jack Clark
If you have systems that are generally intelligent and able to do a broad variety of things, I could run a T-shirt company and I talk to my AI system and it handles logistics and shipping and customer service and bookkeeping and everything else.
Madhumita Murgia
It’s easy to see how this sort of everything machine could be incredibly useful. It could positively transform the world of work and the way our economy functions as a whole. It could potentially speed up any boring task you’ve ever had to do — or maybe just eradicate the boring tasks altogether. At the same time, it’s exactly that sort of generalised AI system that could do real damage.
Jack Clark
Because the challenge of an everything machine is that an everything machine can do everything. And so that’s going to encompass a range of potential misuses or harms which you need to build techniques for ensuring, you know, don’t come to pass.
Madhumita Murgia
So for example, to go back to the point Clark made earlier, an everything machine might be able to come up with the ingredients needed to make a bioweapon. It could cause havoc.
John Thornhill
It’s worth remembering, I suppose, why we can’t control these systems. They’re basically black boxes.
Madhumita Murgia
Yeah. All of this is really hard to guard against because the inner workings of these programs, like ChatGPT and Claude, remain a kind of mystery. These AI chatbots are trained on tonnes and tonnes of data, taken mainly from the internet, and humans can make tweaks to what information goes in and how it’s weighted, but they don’t have that much control over what comes out. Clark says that Anthropic is trying to change this. The first approach is to try to look inside the machine.
Jack Clark
And so we’ve done a huge amount of work on a research program called mechanistic interpretability, which you can think of as being like sticking an AI system in an MRI machine. And when the AI system is operating, you’re looking at what parts of it are lighting up inside the machine and how those relate to the behaviour of the system.
Madhumita Murgia
The second thing Anthropic is doing, just like all its competitors, is trying to make AI safer by embedding some explicit values directly into the software.
Jack Clark
Our system Claude uses an approach called Constitutional AI, which sees us at Anthropic write a literal constitution for the system. The constitution is made up of things like the UN Declaration of Human Rights and, funnily enough, Apple’s terms of service and a few other things. And that lets our system have a slightly greater degree of security and safety with regard to adhering to those principles. And we’ve made the principles clear. So when we talk to policymakers and they say, what are the values of your system, we can say, I’m glad you asked. It’s this constitution plus some combination with the user in the interaction.
John Thornhill
But Madhu, clearly, these guardrails still don’t get around the problem you talked about at the beginning of the episode. Claude is designed to align with human values, but it can still be put to some nefarious uses. It’s already shown it’s capable of instructing people on how to produce agents of chemical warfare, for example.
Madhumita Murgia
Right, which is why I asked Clark, why build these systems at all?
Jack Clark
Well, it’s a great question, and it’s the right question to ask. Another way you should think about this is: why build an incredibly good teacher if the teacher might teach a really bad person who does harm? And just to sort of push on your question, teachers are incredibly useful and they have a huge societal benefit. How do you stop teachers and teaching tools teaching so-called, you know, bad people, or enabling bad people? And the answer there is you use a lot of the existing societal infrastructure, ranging from the law to institutions to various forms of checks, to mitigate that potential downside, because the benefits are so, so significant.
John Thornhill
So basically he’s saying it’s worth developing this everything machine, despite the risks and despite the existential threats it might pose?
Madhumita Murgia
Correct. Jack acknowledges that there are real risks, but this is why Anthropic has been quite vocal in calling for government intervention. They feel that regulation could mitigate some of those risks. Now, I should say that this also means they’re lobbying for the type of regulation that they want. And increasingly, it looks like the next step in the AI debate will be about how we regulate this technology.
[MUSIC PLAYING]
John Thornhill
So, Madhu, we’ve been talking about how an everything machine — an artificial general intelligence that can do everything a human can and more — might pose existential risks. And companies like Anthropic are talking about regulation to make safer AI. Still, there’s a lot of debate about what this regulation should look like.
Madhumita Murgia
Right. So I called up someone called Dan Hendrycks about this. He’s the founder of the Center for AI Safety, an independent think-tank out of California. And Dan spends his days thinking about how these harmful AI situations might play out. Take AI used in the workplace: you can imagine a scenario where that might go wrong.
Dan Hendrycks
Over time, people notice that AIs are doing these tasks more quickly and effectively than any human could. So it’s convenient to give them more jobs with less and less supervision. Eventually, we may reach the point where we have AI CEOs running companies, because they’re much more efficient than other people. There’s willingness to do this sort of thing. For instance, the large Chinese video game company NetDragon Websoft announced that they’re interested in having an AI CEO. If we start giving a lot of the decision-making power to these AI systems, humans will have less and less influence. Competitive pressures would accelerate the expansion of AI use. Other companies, faced with the prospect of being outcompeted, would feel compelled to follow suit just to keep up. So I’m concerned about them getting that power voluntarily and then humans becoming something more like a second-class species, where AIs are basically running the show.
Madhumita Murgia
In fact, we’re already seeing a version of this competitive push happening within the AI industry itself, even when tech companies are insisting that safety is something they’re worried about.
Dan Hendrycks
The issue is that even if they think this is a big concern, unfortunately, what drives a lot of their behaviour is that they need to race to build AI more quickly than other people. It’s kind of like with nuclear weapons. Nobody wants, you know, thousands upon thousands of nuclear weapons. We’d all prefer a world without them. But each country is incentivised to build up a nuclear stockpile.
Madhumita Murgia
Dan says that the AI genie is out of the bottle and there’s no way to put it back inside. But he believes that governments can manage the risk. He’s working on policy to counter potential AI harms. An easy one might be focusing on the computer chips that make training these systems possible.
Dan Hendrycks
For instance, with malicious use, you could imagine doing something further like export controls on chips, you know, keeping track of where these chips are going. So some type of compute governance could be fairly important for making sure that chips don’t fall into the hands of, say, rogue states or terrorist groups.
Madhumita Murgia
Another vision for an AI future might be one where the onus is on the companies themselves to deal with the mess. At the moment, a company like OpenAI or Anthropic isn’t legally liable if someone uses its chatbot to spread spam or, even worse, to make mustard gas. Hendrycks thinks that should probably change.
Dan Hendrycks
Legal liability for AI developers seems very sensible. If Apple develops a new iPhone, they have to submit it for review before it can be delivered to a mass market. There’s no such thing for AI. It seems like a fairly basic request for a technology that’s becoming this societally relevant.
John Thornhill
I can see how this would be very complex at an international level. It requires an extraordinary amount of understanding and co-ordination from government leaders to regulate AI globally. In the UK, Prime Minister Rishi Sunak tried to do as much at the Bletchley Park AI Safety Summit recently. And it was quite remarkable to see US officials sitting alongside Chinese officials discussing regulation.
Madhumita Murgia
But we should say that not everyone is so keen on the regulation of AI. In the previous episode of this series, we heard from Yann LeCun, the Meta AI scientist and one of the pioneers of artificial intelligence.
John Thornhill
And a great enthusiast of artificial general intelligence. Certainly not a doomer.
Madhumita Murgia
Exactly. LeCun thinks advances in this technology could be massively beneficial, and he thinks that the claims about the existential risks of AI are preposterous.
Yann LeCun
Today’s technology is trained with data that is publicly accessible on the internet, and those systems currently are not really capable of inventing anything. So they’re not gonna tell you how to build a bioweapon in ways that you can’t already do by using a search engine for a few minutes.
John Thornhill
In other words, if someone really wanted to build a chemical weapon, for example, they could already do so with a Google search. So why are we getting so worked up about the potential for AI to spew out that information? But there’s another, more principled objection that LeCun has about companies that are calling for government intervention, particularly when it comes to regulation that would limit the advancement of the underlying technology.
Yann LeCun
I think regulating research and development in AI is incredibly counterproductive. There is kind of this idea, which for some people stems from a bit of a superiority complex, that says, oh, you know, it’s OK if we do AI because we know what we’re doing, but it’s not OK for everyone to have access to it because people can’t be trusted. And I think that’s incredibly arrogant.
John Thornhill
LeCun is worried that leading AI companies are going to be too controlling and paternalistic with this revolutionary technology. Part of this has to do with the fact that AI is becoming increasingly closed off.
OpenAI, Anthropic and DeepMind all keep their systems highly secret. We don’t even know what training data they use to build these models. Now, these companies believe that secrecy is necessary to prevent potential misuse. But Meta and LeCun himself are big proponents of what are called open-source AI models. That means other researchers can use the underlying systems to develop their own AI products.
Yann LeCun
I mean, the reason why we have the internet today is because the internet runs on open-source software, and it’s not as if companies didn’t want closed platforms, for various reasons including security. A closed version of the internet would be easier to protect against cyber attacks. But that would be throwing the baby out with the bathwater. And in the end, the sort of decentralised open platform that the internet is today won out.
John Thornhill
So LeCun thinks that AI should follow the open-source principles that helped grow the early Internet. And he’s sceptical about the existential threat posed by AI. He’s not the only one to be doubtful about those hypothetical long-term risks.
Emily Bender is a professor of computational linguistics at the University of Washington who writes frequently about AI. She agrees rapid developments in the technology pose risks, but it’s not existential risk she’s worried about. In fact, she thinks that all the focus and spending of the big tech companies on existential risk are a big distraction from more immediate problems.
Emily Bender
So I can’t talk about whether it’s deliberate or not, but certainly it’s beneficial to them in that way to have the attention focused on these fake fantasy scenarios of existential risk.
John Thornhill
Is it not worth at least putting a small amount of money into the possibility that these AI systems could become so powerful that they endanger humanity?
Emily Bender
It would be extremely low on my list of priorities. I can think of probably 100 things if I sat here that are not getting funded right now, that would be much better uses of that money.
John Thornhill
The issues that Bender is worried about include synthetic media or deepfakes, like a fabricated video of a politician, which is already possible using AI tech.
News clips
Experts say that women are subject to the majority of deepfake crimes . . . It’s the doubt that is cast on authentic video and audio . . .
John Thornhill
She’s also highlighted longstanding issues with automated decision-making systems, the kind of AI programs used by governments to decide who gets welfare benefits . . .
News clip
Parliamentary probe found that tax officials wrongly accused some 10,000 families of fraud over childcare subsidies . . .
John Thornhill
Or by health services to decide who gets an organ transplant.
News clip
Significant racial bias in an algorithm used by hospitals across the nation . . .
John Thornhill
We’ve already seen high-profile cases where this technology has been damaging and discriminatory. And Bender says those concerns are being brushed aside.
Emily Bender
I think it’s about keeping the people in the picture, thinking about who’s being impacted in terms of having social benefits taken away by a bad decision system, in terms of having non-consensual porn being made about them through a text-to-image system, or going all the way back to 2013, when Professor Latanya Sweeney documented how, if you typed an African-American-sounding name into a Google search, there was this one company advertising background checks, and it would say things like ‘has so-and-so been arrested?’ way more frequently for African-American-sounding names than for white European-sounding names. What’s going on there? Well, there’s a reproduction of biases that has an immediate impact on people. If you imagine someone is applying for a job and somebody searches them on Google and gets the suggestion that maybe this person is dangerous, that can have an impact on someone’s career.
John Thornhill
Bender says bias, discrimination and societal inequity are the areas we need to regulate. And that’s very different from what the big AI companies are proposing.
Emily Bender
We need regulators to step up to protect rights. I think they should prioritise input from people who are affected by these systems over the ideas of the people who are building these systems. Sometimes there’s a trope that only the people building it understand it well enough to regulate it. And that is completely misguided because regulations need to look at the impact on society of the system and not the inner workings of the system.
Madhumita Murgia
So, John, what do you make of Emily Bender’s argument?
John Thornhill
Well, as she described so eloquently, there are immediate concerns with the use of AI that regulators need to address. But where I differ from her is that I think it’s worth considering some of these bigger, longer-term existential risks, which could turn out to be real issues. What do you think about that?
Madhumita Murgia
Yeah, I think, you know, companies are focused on existential risk. Some might say it’s a convenient distraction that lets them avoid looking at these problems, but I think it’s because they’re fundamentally research organisations and existential risk is still an open research question, which is why they’re interested in it. I think the more immediate risks we can already regulate with the agencies and the infrastructure we use to regulate the rest of technology and industry today, for example in medicine or in financial services. We could use narrow regulation to address the immediate risks. We don’t really need AI companies to help us figure that out.
John Thornhill
It’s interesting to think about where this regulation is gonna go. And there are clearly a number of countries that are now getting very serious about regulation. I think the Chinese are in the lead on this and are really cracking down on some areas of AI use. Ironically, one of the places where legislation will take the longest to introduce is the UK, which held the Bletchley Park conference. It doesn’t have specific plans for regulating AI in the way that other countries now do. Should we be worried, do you think, by the fact that the industry is having such a strong say in the regulation?
Madhumita Murgia
Well, I’d say it’s not new. There’s always been regulatory capture, as we call it, in all sorts of areas, from food and drugs to tobacco and advertising and so on. So the tech companies aren’t unique in trying to influence and have a say in the rules that will govern them. But I do think they hold a uniquely concentrated amount of power, particularly when it comes to knowledge and resources in this space, because there’s so little independent academic research happening at the cutting edge of AI development; it just seems to require so much money, infrastructure, chips and so on. The frontier-level research is currently being done largely inside these closed, for-profit companies, and so they hold all the knowledge that comes along with that. And I think that is quite concerning.
John Thornhill
And several of those companies have an explicit mission to achieve artificial general intelligence. And that gives the sense, I think, that human-level AI is inevitable. But there’s a question of whether we might all be wrong about that. What if we’re overestimating our ability to attain artificial general intelligence?
Emily Bender
We are being fooled by our own ability to interpret the language into thinking there’s more there than there is. And I don’t blame people who encounter it. I put the blame on OpenAI, who are overselling the technology and saying it’s something that it isn’t.
John Thornhill
More from Emily Bender next time here on Tech Tonic from the Financial Times.
Our senior producer is Edwin Lane. The producer is Josh Gabert-Doyon. Manuela Saragosa is executive producer. Sound design and engineering by Samantha Giovinco and Breen Turner. Original music by Metaphor Music. The FT’s global head of audio is Cheryl Brumley.
Madhumita Murgia
This is the second episode in this season of Tech Tonic on Superintelligent AI. We’ll be back over the next four weeks with more. Get every episode as it lands by subscribing to Tech Tonic on your usual podcast platform. And in the meantime, we’ve made some articles free to read on FT.com, including my recent magazine piece on the NHS algorithm that decides who receives organ transplants. Just follow the links in the show notes.
[MUSIC PLAYING]