- Key insight: The CEO of the largest U.S. bank spoke at an Anthropic event about the cybersecurity risks presented by advanced AI models.
- Expert quote: It’s “very high risk.” Anthropic did the right thing by “giving people a chance to study it, understand the vulnerabilities, come up with plans so we can handle it,” said Jamie Dimon, CEO of JPMorganChase.
- Forward look: Generative AI and more traditional software are likely to coexist in the future, Dimon said.
After previously playing down the cybersecurity risks of advanced AI models, the CEO of the largest U.S. bank acknowledged them at an Anthropic event on Tuesday.
It’s “very high risk,” Dimon said at Tuesday’s event in New York. Anthropic did the right thing, he said, by “giving people a chance to study it, understand the vulnerabilities, come up with plans so we can handle it.”
During the bank’s earnings call, Dimon had noted that Mythos “does create additional vulnerabilities.”
“The risks are very real,” said Anthropic CEO Dario Amodei, who spoke on the same panel, which was moderated by Andrew Ross Sorkin. “This is why we took the actions we did,” he said, referring to Anthropic’s decision to limit Mythos access to 12 large companies that joined Project Glasswing, and to share his concerns with the federal government.
Three months ago, with the model that preceded Mythos, Anthropic found about 30 vulnerabilities in Firefox. “With Mythos, we found almost 300 vulnerabilities in Firefox,” Amodei said.
Amodei said he had a good meeting with Treasury Secretary Scott Bessent about the cyber risks from Mythos.
“The Secretary is getting very good advice from people across the banks and financial services,” he said. “I think they’re taking the risks seriously, and I really commend them for what they’re doing. But because this is emerging so quickly, I think there’s inherently a scramble to figure it out. And what we have now is a bit of an ad hoc process, something that almost mimics regulation, but lacks consistency.”
Other AI companies are about three months behind Anthropic, while Chinese models are six to 12 months behind, Amodei said.
“So I think we have roughly that amount of time to fix all these vulnerabilities,” he said. “We’ve identified these tens of thousands of vulnerabilities. The reason we haven’t announced many of them is, only a small fraction have been fixed. If we announce something without it being fixed, then the bad guys will exploit it. And so the danger is some enormous increase in the amount of vulnerabilities, in the amount of breaches, in the financial damage that’s done from ransomware on schools, hospitals, not to mention banks, financial accounts.”
Amodei argued that Mythos can be used to write code that’s inherently more secure by design.
“So I think there’s a better world on the other side of this,” he said. “This isn’t just about fear. This is about a moment of danger where if we respond to it correctly, and I think we started to take the first steps, then we can have a better world on the other side of it.”
Amodei brushed off concerns that there aren’t enough computing resources for the government and private-sector companies to simultaneously use Mythos to find all software vulnerabilities.
“That’s actually a straight misconception,” Amodei said. “Mythos consumes a very tiny fraction of Anthropic compute. It’s no issue whatsoever to increase the amount of compute by 3x or 10x.”
Amodei unsurprisingly pushed back on the idea of having a government agency approve new AI models, much like the Food and Drug Administration does.
“Whenever there’s an emerging technology, we’ve got to get the balance just right,” he said. “We don’t want a Wild West where you can just do anything. The FDA slows down medical progress a lot. That’s a cautionary tale in the other direction. We need to, little by little, grope our way to some process that lets the industry operate expeditiously, but puts guardrails on the most serious things.”
Asked about what the future will look like, Amodei promised Anthropic’s models’ error rates will go down, and its agents will become more autonomous and faster.
Dimon noted that his bank started using AI in 2012 to work on mortgage problems. “I’d call it more advanced math,” he said. “It was machine learning, where we noticed patterns and ran through lots of data that a human being couldn’t do, and notice those patterns.”
“And it’s just starting,” Dimon said.
Asked whether incumbent software companies will go out of business as generative AI models become more popular, Dimon said both incumbents and gen AI models are likely to survive, since existing software companies are adding AI agents to their products.
“I think it’ll be a little bit of both,” he said.