Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy.
This week, I’m focusing on a new court filing that sheds more light on the reasons for Sam Altman’s ouster from OpenAI two years ago. I also look at Amazon’s kerfuffle with Perplexity over AI shopping agents, and at another court ruling, this one from the U.K., that lets AI companies keep training models on copyrighted material.
Sign up to receive this newsletter every week via email here. And if you have comments on this issue and/or ideas for future ones, drop me a line at sullivan@fastcompany.com, and follow me on X (formerly Twitter) @thesullivan.
Two years after OpenAI boardroom drama, a lot is riding on Altman’s trustworthiness
OpenAI CEO Sam Altman has done more than anyone else to whip up faith and trust that the next industrial revolution—AI—is imminent and inevitable. That faith and trust have already unleashed hundreds of billions of dollars in investment in the infrastructure needed to support the transition. Some say the infusion of cash is single-handedly propping up the U.S. stock market, and, by extension, the economy. That faith and trust have also moved Washington to all but abandon its oversight role in favor of acting as enabler and cheerleader.
But questions about Altman’s trustworthiness won’t go away. Some troubling details about Altman’s famous 2023 firing by his board (and his subsequent rehiring, after a board reshuffle) came to light in a recently unsealed court filing containing part of a deposition of OpenAI cofounder and former chief scientist Ilya Sutskever, taken in the lawsuit Elon Musk has brought against the company.
At the time of Altman’s ouster, the board said that he had kept key facts about the business from them. The board had also considered reports that Altman undermined his executives and pitted them against each other. Sutskever confirmed to attorneys during the seven-hour deposition that he believes Altman lied habitually. He testified that Altman had been pitting Mira Murati, the CTO at the time, against Daniela Amodei, who eventually left with her brother Dario Amodei and others to form Anthropic.
We learn that Altman’s alleged behavior wasn’t short-term or a reaction to a crisis, but part of a pattern. Sutskever said he and then-CTO Murati had been documenting Altman’s indiscretions and preparing to oust him for more than a year before proposing it to the board. (They delayed the firing until Altman loyalists on the board were too few to stop it, Sutskever said.)
One board member, Helen Toner, said a year after departing that OpenAI executives (likely Sutskever and Murati) began talking to the board about the Altman problems in the month before the November 2023 dustup. “The two of them suddenly started telling us . . . how they couldn’t trust him, about the toxic atmosphere he was creating,” Toner said on The TED AI Show podcast. “They used the phrase ‘psychological abuse,’ telling us they didn’t think he was the right person to lead the company to AGI, telling us they had no belief that he could or would change.”
Sutskever, in fact, wrote a 52-page memo describing Altman’s indiscretions (at the request of fellow board member Adam D’Angelo, and possibly of Toner and fellow board member Tasha McCauley as well). He wrote another memo about then-president and board chair Greg Brockman, who resigned after Altman was fired.
Toner has offered other examples of Altman’s lies of omission, including his failure to tell the board about plans to launch ChatGPT, and his failure to disclose that he personally owned the OpenAI Startup Fund “even though he constantly was claiming to be an independent board member with no financial interest in the company,” Toner said. She added that Altman gave the board inaccurate information about the “small number of formal safety processes” OpenAI had in place, so the board had no way of knowing how well those processes were working. (Toner is an AI safety expert.)
People say that political infighting happens within every company. That’s probably true. People say that CEOs are like politicians; they have to balance competing priorities and personalities within the company, so a certain amount of “finessing” of the truth is expected. I’ll buy that too.
And the context is important. OpenAI’s history, and the recent history of generative AI, had a lot to do with setting up the conflict. OpenAI started out as an idealistic little AI lab, but a few years in it made a breakthrough discovery: AI models got predictably smarter as they were supersized and given more computing power. Developing frontier models became a hugely expensive undertaking, requiring enormous capital. OpenAI had to spend heavily to maintain its lead in the frontier-model arms race that ensued, and needed consumer and enterprise revenue streams to help pay for it. (CFO Sarah Friar said Wednesday that OpenAI may look to the government to guarantee its infrastructure loans.)
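That “predictably smarter” relationship has a concrete published form. As a rough sketch (the power law and exponent below come from OpenAI’s 2020 “Scaling Laws for Neural Language Models” paper, not from anything in the court filings), model loss L falls off as training compute C grows, with C_c a fitted constant:

    L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}, \qquad \alpha_C \approx 0.05

The tiny exponent is the economic story in miniature: halving the loss takes roughly 2^{1/0.05}, about a millionfold, more compute, which is why staying at the frontier became a question of raising tens of billions of dollars.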
It’s not easy to run that kind of capital-hungry business as a nonprofit. Yet Altman was answering to a nonprofit board of directors. Toner said as much on the same podcast: “The board is a nonprofit board that was set up explicitly for the purpose of making sure that the company’s public good mission was primary, was coming first—over profits, investor interests, and other things.” Maybe something had to give.
But . . .
But if the CEO was (or is) hiding truths from the board, something is wrong. Given the potential risks of AI, it’s especially disturbing that one of Altman’s alleged lies of omission, according to Toner, concerned safety measures. Superhuman AI won’t care about the corporate structure of its creators; if it’s not responsibly aligned and governed, its potential for doing harm is the same no matter who builds it.
Amazon to Perplexity: ‘Keep your agents out of our market’
Amazon is apparently not ready for the AI agent revolution. The company accused Perplexity of computer fraud after the AI company’s Comet browser allowed users to search for and purchase items on Amazon’s platform. Amazon believes Perplexity needs the e-commerce giant’s permission to let users do that. Its attorneys sent Perplexity CEO Aravind Srinivas a cease-and-desist letter saying, in effect, that Comet’s shopping agents are no longer welcome on Amazon.
We’re in the early innings of AI agents. Some of the first consumer agents, Perplexity’s Comet among them, can navigate e-commerce websites and even make purchases. In the future, agents may routinely do our business by interacting with other agents over a secure agent-to-agent interface—no need for a traditional web interface at all.
Perplexity says Amazon sent an “aggressive legal threat” via a cease-and-desist letter dated October 31, demanding the company stop enabling purchases through its Comet Assistant. Amazon’s lawyers say that Perplexity lacks authorization to access Amazon user accounts or account details using what they described as “disguised or obscured” AI agents. Amazon has already taken steps in recent months to block external AI agents from OpenAI, Google, Meta, and others from crawling product information at its website.
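Those blocks typically rest on robots.txt, the web’s long-standing, voluntary protocol in which a site lists which declared crawlers may fetch which pages. The Python sketch below shows how a well-behaved agent would consult those rules before touching a product page; the agent name and URLs are hypothetical placeholders, and nothing here reflects how Comet actually works.

    # Minimal sketch: check a site's robots.txt before crawling.
    # "ExampleShoppingAgent" and example.com are hypothetical.
    from urllib import robotparser

    AGENT = "ExampleShoppingAgent"  # the agent's declared user-agent name

    rules = robotparser.RobotFileParser()
    rules.set_url("https://www.example.com/robots.txt")
    rules.read()  # fetch and parse the site's robots.txt rules

    product_url = "https://www.example.com/products/12345"
    if rules.can_fetch(AGENT, product_url):
        print("Allowed to fetch", product_url)
    else:
        print("robots.txt disallows", AGENT, "from fetching", product_url)

The catch, and the crux of Amazon’s complaint, is that robots.txt only binds agents that announce themselves; an agent that presents itself as an ordinary human browser session, as Amazon alleges Comet’s do, never trips those rules at all.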
Perplexity accused Amazon of “bullying,” and argued that a tool that makes shopping easier for the consumer can only benefit the e-commerce giant. Perplexity suggested that Amazon is more focused on manipulating shopper decisions by showing ads, injecting upsells and confusing offers, and pushing sponsored products in search results. Amazon says Perplexity’s agents hurt shoppers by skipping over personalized product recommendations and potentially failing to show customers the fastest available delivery options. Amazon and Perplexity did not respond to requests for comment.
In theory, Amazon could change its terms of service to more explicitly ban third-party shopping agents from its site. But what if such agents create real value (time savings) for consumers? Can Amazon easily ban some agents but not others?
U.K. court says AI companies can use copyrighted material to train models
The AI industry has notched another legal win for its practice of scraping copyrighted digital content from the web and using it to train AI models. Getty Images had sued Stability AI in the High Court of Justice of England and Wales, claiming that the company violated copyright when it downloaded millions of Getty photos without permission to train its Stable Diffusion image generator. Judge Joanna Smith ruled this week that because the Stable Diffusion model doesn’t store or reproduce the Getty images, it can’t be said to have “copied” them under U.K. copyright law. Getty had already dropped its main copyright-infringement claim during the trial because the training didn’t physically happen within the U.K. court’s jurisdiction. Getty has also sued Stability AI in the U.S., in federal court in Delaware, where the case is still ongoing.
Neil Chilson, former chief technologist for the FTC and currently head of AI Policy with the Abundance Institute, called the decision “consistent with the nature of the technology and a successful result for continued AI innovation.”
Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
The early-rate deadline for Fast Company’s World Changing Ideas Awards is Friday, November 14, at 11:59 p.m. PT. Apply today.


