On Wednesday, the Department of Justice announced the arrest of Jonathan Rinderknecht, who was federally charged with setting the blaze that eventually became the massive and deadly Palisades Fire, which ravaged coastal communities of Los Angeles County throughout January.
Authorities mentioned various pieces of evidence that they said supported the case against Rinderknecht, but they spotlighted a particularly novel one: records of his exchanges with ChatGPT. The 29-year-old Florida man allegedly prompted OpenAI’s chatbot to create images of cities and forests engulfed in flames, told it about feeling “liberated” after burning a Bible, and, shortly after igniting a brush fire just after midnight on New Year’s Day, asked it, “Are you at fault if a fire is lift [sic] because of your cigarettes?”
Rinderknecht has yet to enter a plea, but if he goes to trial, the case could offer a meaningful glimpse into how the proliferation of AI technology will affect criminal investigations and prosecutions in the future. Users have proven overly trustful of chatbots, treating them as therapists, confidants, and even romantic partners to which they unburden intimate secrets. The reality is that they are divulging sensitive information on platforms that offer no legal confidentiality, developed by companies that will readily turn their data over to law enforcement if served with a subpoena, warrant, or court order.
A New Frontier
All this makes chatbot records a potential bonanza for law enforcement, tech and legal experts tell Rolling Stone, perhaps more valuable than the text messages and social media footprints that authorities currently focus on when scrutinizing individuals linked to a crime. Generative AI correspondence, too, will soon be a regular part of criminal investigations and trials.
“There is no doubt that AI chatbot logs are going to take center stage in evidentiary disputes across the nation,” says Kyle A. Valente, an attorney with the law firm Bressler Amery & Ross PC who published a paper on the topic last month. “Civil practitioners will want to obtain chat histories to show intent or motive in cases dealing with fraud, misrepresentation, breach of contract, and the like,” he says, while prosecutors will seek to introduce these files to establish elements of a case such as mens rea, or a suspect’s knowledge of wrongdoing.
Though a number of civil lawsuits related to user harm are currently pending against AI firms, criminal prosecutors have hardly begun to explore how they can leverage defendants’ chatbot usage to secure convictions. Valente cites one remarkable early precedent: a man from Roanoke, Virginia, was sentenced in September to 25 years in prison for first-degree murder after prosecutors argued that his messages with Snapchat’s My AI bot demonstrated premeditation in a 2023 slaying. Before fatally shooting another man in a petty dispute, the now convicted defendant, 18 years old at the time, told the bot he was going to “fight” and asked it, “What if I shot them if they step on my property with hostile intent?” (The chatbot replied that it could not “condone or encourage any violent or illegal behavior.”)
‘It’s Like Your Diary Times 10’
“DNA residue, fingerprints, everything you watch in all these crime shows pales in comparison to the private data that you have on your phone,” says Rob T. Lee, a digital forensics and AI expert at SANS Institute, a cybersecurity training cooperative. “What happens now with AI, you’re having a full conversation with these chatbots. It is no longer just a search for a topic.” Whereas criminal suspects in the past may have been incriminated by a Google query on, say, how to clean up bloodstains, a more comprehensive, detailed back-and-forth with a tool like ChatGPT gives investigators far more opportunities to prove intent and mindset, Lee explains. “All of that is logged, and it’s all sitting on servers,” he says. “It’s like your diary times 10.”
“We saw this with social media, where all of a sudden, law enforcement started to understand that a lot of important, relevant evidence could be gleaned from social media posts and profiles,” says Meetali Jain, a tech and human rights lawyer who serves as director of the Tech Justice Law Project and is representing multiple parents in lawsuits alleging that AI products encouraged their children to commit suicide. “That became a standard part of a criminal investigation. I think we’re going to start to see that in this context too.”
The difference, she says, is that this next development concerns “machine-generated outputs,” and there aren’t established rules around how this kind of material can or should be admitted as exhibits in court. “It’s very much a live issue,” Jain says. Valente concurs: “At this early juncture, the bounds are limitless until such a time as courts issue opinions that establish some framework for dealing with this type of evidence,” he says.
A Fundamental Difference
Vincent Conitzer, director of the Foundations of Cooperative AI Lab at Carnegie Mellon University and head of technical AI engagement at the University of Oxford’s Institute for Ethics in AI, also says that chatbot exchanges fundamentally differ from text messages with another person, whom we would expect to take action to prevent a crime if told about it in advance. “Will we have similar expectations of a chatbot?” Conitzer asks. “Already the companies try to have guardrails so that their chatbots do not help someone commit a crime, though those guardrails are still very brittle.”
Indeed, an NBC News report this week revealed that ChatGPT can be manipulated into giving directions on the assembly of chemical and biological weapons. And Matthew Livelsberger, the veteran who on New Year’s Day fatally shot himself in a Tesla Cybertruck parked in front of the Trump hotel in Las Vegas right before an incendiary device blew up the vehicle, used generative AI to help plan his attack.
Lee notes that several AI companies have already indicated their ability to flag user activity that appears to foreshadow a crime. “They’re getting much better at detecting foreign adversaries using these systems for malicious ends, and I can only imagine that they’re doing similar for people [in the U.S.] that may be looking to kill or commit acts of terror,” he says.
Human Review
OpenAI has said it employs a human review team authorized, at its discretion, to inform law enforcement when it comes across content that “involves an imminent threat of serious physical harm to others.” How often the team escalates to this level of alarm is unclear. “They’re not doing press releases on it,” Lee says, speculating that the company prefers to spotlight efforts to disrupt state-affiliated threat actors in Russia, Iran, and China, “which is much easier to get a pat on the back for.”
The major American AI players are much more tight-lipped, Lee says, about their ability to detect a possible school shooter, to take one example. And, as Jain points out, even if they are picking up these warning signs, reporting them isn’t mandatory.
“I think [the companies] are going to say, ‘Look, we can’t be responsible for what kinds of queries users pose to us through our chatbot, we can’t possibly know what he was going to do,’” Jain predicts. She is struck by the fact that when the Justice Department put forward its claims about how Rinderknecht had used ChatGPT before and during the Palisades Fire, it didn’t mention OpenAI or whether the program could have appeared to encourage “harm to others.”
Chatbots are known to validate users’ obsessions and delusions, and most of the litigation around this phenomenon has so far pertained to self-harm, as in the suits Jain has brought against OpenAI and Character Technologies. The question of the role a chatbot might come to play in an episode of outward violence or destructive behavior is murkier still.
“The focus is going to be on further substantiating the culpability of the individual, at the expense of really looking at how the companies designed their chatbots,” Jain says, likening the issue to entrapment, a scenario in which law enforcement induces someone to commit a crime. She envisions prosecutors homing in on a “gotcha moment” in a defendant’s chatbot logs, “without any sort of understanding of how the machine got to that output.”
“I feel like it’s going to be a very one-sided story,” Jain adds. “I can’t imagine that a company is going to stick up for the rights of users.”
Nevertheless, it seems there’s little the major AI players can do to avoid being drawn into criminal proceedings in the months and years ahead. Valente, for one, expects that their legal and compliance departments will take a “proactive” approach to the problem. What would that entail? “Revisiting privacy policies and updating terms and conditions,” he says. As always, the devil is in the fine print.