OAKLAND, Calif. — At the heart of the trial pitting Elon Musk against OpenAI CEO Sam Altman is a moment when they found common cause on an ever more pressing question: how to protect humanity from the risks of artificial intelligence.
It turned sour, and the jury is charged with settling the ensuing legal dispute between the two Silicon Valley titans.
But the unresolved questions about the dangers of AI have been looming over the federal courthouse in Oakland, California, since the trial began last week. The technology itself is not on trial – the judge has warned lawyers not to get “sidetracked” by questions about its dangers – but witness testimony has touched on concerns around workforce disruptions and the prospect raised by Musk that superhuman AI might one day kill us all.
Musk, the world’s richest person, filed the case accusing his fellow OpenAI co-founder of betraying promises to keep the company as a nonprofit. Altman, in turn, accuses Musk of trying to hobble the ChatGPT maker for the benefit of his own AI company.
One witness, AI pioneer Stuart Russell, said that the “winner take all” power struggle over AI’s future is itself threatening humanity.
Musk’s lawyers brought Russell to the stand as an expert witness, at a rate of $5,000 an hour. The University of California, Berkeley computer scientist listed a host of AI dangers, from racial and gender discrimination to job displacement, misinformation and emotional attachments that take some AI chatbot users down a spiral of psychosis.
“Whichever company develops AGI first would have a very big advantage” and an increasingly big lead over everyone else, Russell told the court, using the initials for artificial general intelligence, a term for advanced AI technology that surpasses humans at many tasks.
The trial centers on the 2015 birth of OpenAI as a nonprofit startup primarily funded by Musk.
Both Musk and Altman, who has not yet testified in the trial, have said they wanted OpenAI to safely develop AGI for the benefit of humanity, not for any one person’s gain or under any one person’s control. And each camp alleges it was the other side trying to control it.
A jury of nine people selected from the San Francisco Bay Area will get to say which one of them is telling the truth.
Early on, Judge Yvonne Gonzalez Rogers warned lawyers, particularly Musk’s, not to delve into broader AI concerns that go beyond Musk’s claims that OpenAI violated its charitable mission.
“This is not a trial on the safety risks of artificial intelligence. This is not a trial on whether or not AI has damaged humanity,” Gonzalez Rogers told lawyers before jurors arrived at the federal courthouse.
Still, Musk managed to skirt that guidance in his testimony last week. Asked to describe artificial general intelligence, Musk said it is when AI becomes “as smart as any human,” adding that “we are getting close to that point” and that AI will be smarter than any human as soon as next year.
Musk said he has “extreme concerns” about AI and has had them for a long time. Musk said he wanted a “counterpoint” to Google, which at the time had “all the money, all the computers and all the talent” for AI, with no counterbalance.
“I was concerned AI would be a double-edged sword,” he said.
During his testimony, Musk repeatedly said that he could have founded OpenAI as a for-profit company, just like the other companies he started or took over. “I deliberately chose this,” he said, “for the public good.”
The judge expressed some skepticism. In comments to lawyers last week before the jury came into the room, Gonzalez Rogers pointed out that Musk, “despite these risks, is creating a company that is in the exact same space,” referring to the billionaire’s xAI artificial intelligence company, which launched in 2023 and has since merged with Musk’s rocket company SpaceX.
OpenAI’s side also claims its goals are to benefit the public. OpenAI co-founder and president Greg Brockman, a defendant in Musk’s lawsuit along with Altman and their company, said he thought the technology OpenAI was developing was “transformative” — bigger than corporations, corporate structures and any one individual. It was, he said, “about humanity as a whole.”
Brockman testified this week that his No. 1 goal was always the “mission” of OpenAI and it was Musk who sought unilateral control over the company.
Brockman recalled a meeting where at first Musk seemed open to the idea of Altman being OpenAI’s CEO. In the end, however, “he said people needed to know he was in charge.”
In addition to damages, Musk is seeking Altman’s ouster from OpenAI’s board. If Musk wins, it could derail OpenAI’s plans for an initial public offering of its shares.
___
O’Brien reported from Providence, Rhode Island.