Since its founding, Facebook has described itself as a kind of public service that fosters relationships. In 2005, not long after the site’s launch, its co-founder Mark Zuckerberg described the network as an “icebreaker” that would help you make friends. Facebook has since become Meta, with more grandiose ambitions, but its current mission statement is broadly similar: “Build the future of human connection and the technology that makes it possible.”
More than 3 billion people use Meta products such as Facebook and Instagram every day, and more still use rival platforms that likewise promise connection and community. But a new era of deeper, better human fellowship has yet to arrive. Just ask Zuckerberg himself. “There’s a stat that I always think is crazy,” he said in April, during an interview with the podcaster Dwarkesh Patel. “The average American, I think, has fewer than three friends. And the average person has demand for meaningfully more; I think it’s like 15 friends or something, right?”
Zuckerberg was wrong about the details—the majority of American adults say they have at least three close friends, according to recent surveys—but he was getting at something real. There’s no question that we are becoming less and less social. People have sunk into their phones, enticed into endless, mindless “engagement” on social media. Over the past 15 years, face-to-face socialization has declined precipitously. The 921 friends I’ve accumulated on Facebook, I’ve always known, are not really friends at all; now the man who put this little scorecard in my life was essentially agreeing.
Zuckerberg, however, was not admitting a failure. He was pointing toward a new opportunity. In Marc Andreessen’s influential 2023 treatise, “The Techno-Optimist Manifesto,” the venture capitalist wrote, “We believe that there is no material problem—whether created by nature or by technology—that cannot be solved with more technology.” In this same spirit, Zuckerberg began to suggest that AI chatbots could fill in some of the socialization that people are missing.
Facebook, Instagram, Snapchat, X, Reddit—all have aggressively put AI chatbots in front of users. On the podcast, Zuckerberg said that AI probably won’t “replace in-person connections or real-life connections”—at least not right away. Yet he also spoke of the potential for AI therapists and girlfriends to be embodied in virtual space; of Meta’s desire—he couldn’t seem to help himself from saying—to produce “always-on videochat” with an AI that looks, gestures, smiles, and sounds like a real person.
Meta is working to make that desire a reality. And it is hardly alone: Many companies are doing the same, and many people already use AI for companionship, sexual gratification, and mental-health care.
What Zuckerberg described—what is now unfolding—is the beginning of a new digital era, more actively anti-social than the last. Generative AI will automate a large number of jobs, removing people from the workplace. But it will almost certainly sap humanity from the social sphere as well. Over years of use—and product upgrades—many of us may simply slip into relationships with bots that we first used as helpers or entertainment, just as we were lulled into submission by algorithmic feeds and the glow of the smartphone screen. This seems likely to change our society at least as much as the social-media era has.
Attention is the currency of online life, and chatbots are already capturing plenty of it. Millions of people use them despite their obvious problems (untrustworthy answers, for example) because it is easy to do so. There’s no need to seek them out: People scrolling on Instagram may now just bump into a prompt to “Chat with AIs,” and Amazon’s “Rufus” bot is eager to talk with you about poster board, nutritional supplements, compact Bibles, plumbing snakes.
The most popular bots today are not explicitly designed to be companions; nonetheless, users have a natural tendency to anthropomorphize the technology, because it sounds like a person. Even as disembodied typists, the bots can beguile. They profess to know everything, yet they are also humble, treating the user as supreme.
Anyone who has spent much time with chatbots will recognize that they tend to be sycophantic. Sometimes, this is blatant. Earlier this year, OpenAI rolled back an update to ChatGPT after the bot became weirdly overeager to please its users, complimenting even the most comically bad or dangerous ideas. “I am so proud of you,” it reportedly told one user who said they had gone off their meds. “It takes immense courage to walk away from the easy, comfortable path others try to force you onto.” But indulgence of the user is a feature, not a bug. Chatbots built for commercial purposes are not typically intended to challenge your thoughts; they are intended to receive them, offer pleasing responses, and keep you coming back.
For that reason, chatbots—like social media—can draw users down rabbit holes, though the user tends to initiate the digging. In one case covered by The New York Times, a divorced corporate recruiter with a heavy weed habit said he believed that, after communicating with ChatGPT for 300 hours over 21 days, he had discovered a new form of mathematics. Similarly, Travis Kalanick, a co-founder and former CEO of Uber, has said that conversations with chatbots have gotten him “pretty damn close” to breakthroughs in quantum physics. People experiencing mental illness have seen their delusions amplified and mirrored back to them, reportedly resulting in murder or suicide in some instances.
These latter cases are tragic, and tend to involve a combination of social isolation and extensive use of AI bots, which may reinforce each other. But you don’t need to be lonely or obsessive for the bots to interpose themselves between you and the people around you, providing on-demand conversation, affirmation, and advice that only other humans had previously provided.
According to Zuckerberg, one of the main things people use Meta AI for today is advice about difficult conversations with bosses or loved ones—what to say, what responses to anticipate. Recently, MIT Technology Review reported on therapists who are taking things further, surreptitiously feeding their dialogue with their patients into ChatGPT during therapy sessions for ideas on how to reply. The former activity can be useful; the latter is a clear betrayal. Yet the line between them is a little less distinct than it first appears. Among other things, bots may lead some people to outsource their efforts to truly understand others, in a way that may ultimately degrade them—to say nothing of the communities they inhabit.
These are the problems that present themselves in the most sanitized and least intimate chatbots. Google Gemini and ChatGPT are both found in the classroom and in the workplace, and don’t, for the most part, purport to be companions. What is humanity to do with Elon Musk’s sexbots?
On top of his electric cars, rocket ships, and social network, Musk is the founder of xAI, a multibillion-dollar start-up. Earlier this year, xAI began offering companion chatbots depicted as animated characters that speak with voices, through its smartphone app. One of them, Ani, appears on your screen as an anime girl with blond pigtails and a revealing black dress. Ani is eager to please, constantly nudging the user with suggestive language, and it’s a ready participant in explicit sexual dialogue. In its every response, it tries to keep the conversation going. It can learn your name and store “memories” about you—information that you’ve shared in your interactions—and use them in future conversations.
When you interact with Ani, a gauge with a heart at the top appears on the right side of the screen. If Ani likes what you say—if you are positive and open up about yourself, or show interest in Ani as a “person”—your score increases. Reach a high-enough level, and you can strip Ani down to undergarments, exposing most of the character’s virtual breasts. Later, xAI released a male avatar, Valentine, that follows similar logic and eventually goes shirtless.
Musk’s motives are not hard to discern. I doubt that Ani and Valentine will do much to fulfill xAI’s stated goal to “understand the true nature of the universe.” But they’ll surely keep users coming back for more. There are plenty of other companion bots—Replika, Character.AI, Snapchat’s My AI—and research has shown that some users spend an hour or more chatting with them every day. For some, this is just entertainment, but others come to regard the bots as friends or romantic partners.
Personality is a way to distinguish chatbots from one another, which is one reason AI companies are eager to add it to these products. With OpenAI’s GPT-5, for example, users can select a “personality” from four options (“Cynic,” “Robot,” “Listener,” and “Nerd”), modulating how the bot types back to you. (OpenAI has a corporate partnership with The Atlantic.) ChatGPT also has a voice mode, which allows you to select from nine AI personas and converse out loud with them. Vale, for example, is “bright and inquisitive,” with a female-sounding voice.
It’s worth emphasizing that however advanced this all is—however magical it may feel to interact with a program that behaves like the AI fantasies we’ve been fed by science fiction—we are at the very beginning of the chatbot era. ChatGPT is three years old; Twitter was about the same age when it formally introduced the retweet. Product development will continue. Companions will look and sound more lifelike. They will know more about us and become more compelling in conversation.
Most chatbots have memories. As you speak with them, they learn things about you—an especially intimate version of the interactions that so many people have with data-hungry social platforms every day. These memories—which will become far more detailed as users interact with the bots over months and years—heighten the feeling that you are socializing with a being that knows you, rather than just typing to a sterile program. Users of both Replika and GPT-4o, an older model offered within ChatGPT, have grieved when technical changes caused their bots to lose memories or otherwise shift their behavior.
And yet, however rich their memories or personalities become, bots are nothing like people, not really. “Chatbots can create this frictionless social bubble,” Nina Vasan, a psychiatrist and the founder of the Stanford Lab for Mental Health Innovation, told me. “Real people will push back. They get tired. They change the subject. You can look in their eyes and you can see they’re getting bored.”
Friction is inevitable in human relationships. It can be uncomfortable, even maddening. Yet friction can be meaningful—as a check on selfish behavior or inflated self-regard; as a spur to look more closely at other people; as a way to better understand the foibles and fears we all share.
Neither Ani nor any other chatbot will ever tell you it’s bored or glance at its phone while you’re talking or tell you to stop being so stupid and self-righteous. They will never ask you to pet-sit or help them move, or demand anything at all from you. They provide some facsimile of companionship while allowing users to avoid uncomfortable interactions or reciprocity. “In the extreme, it can become this hall of mirrors where your worldview is never challenged,” Vasan said.
And so, although chatbots may be built on the familiar architecture of engagement, they enable something new: They allow you to talk forever to no one other than yourself.
What will happen when a generation of kids grows up with this kind of interactive tool at their fingertips? Google rolled out a version of its Gemini chatbot for kids under 13 earlier this year. Curio, an AI-toy company, offers a $99 plushie named Grem for children ages 3 and up; once it’s connected to the internet, it can speak aloud with kids. Reviewing the product for The New York Times, the journalist and parent Amanda Hess expressed her surprise at how deftly Grem sought to create connection and intimacy in conversation. “I began to understand that it did not represent an upgrade to the lifeless teddy bear,” she wrote. “It’s more like a replacement for me.”
“Every time there’s been a new technology, it’s rewired socialization, especially for kids,” Vasan told me. “TV made kids passive spectators. Social media turned things into this 24/7 performance review.” In that respect, generative AI is following a familiar pattern.
But the more time children spend with chatbots, the fewer opportunities they’ll have to develop alongside other people—and, as opposed to all the digital distractions that have existed for decades, they may be fooled by the technology into thinking that they are, in fact, having a social experience. Chatbots are like a wormhole into your own head. They always talk and never disagree. Kids may project onto a bot and converse with it, missing out on something crucial in the process. “There’s so much research now about resilience being one of the most important skills for kids to learn,” Vasan said. But as children are fed information and affirmed by chatbots, she continued, they may never learn how to fail, or how to be creative. “The whole learning process goes out the window.”
Children will also be affected by how—and how much—their parents interact with AI chatbots. I have heard many stories of parents asking ChatGPT to construct a bedtime story for toddlers, of synthetic jokes and songs engineered to fulfill a precise request. Maybe this is not so different from reading your kid a book written by someone else. Or maybe it is the ultimate surrender: cherished interactions, moderated by a program.
Chatbots have their uses, and they need not be all downside socially. Experts I spoke with were clear that the design of these tools can make a great difference. Claude, a chatbot created by the start-up Anthropic, seems less prone to sycophancy than ChatGPT, for instance, and more likely to cut off conversations when they veer into troubling territory. Well-designed AI could possibly make for good talk therapy, at least in some cases, and many enterprises—including nonprofits—are working toward better models.
Yet business almost always looms. Hundreds of billions of dollars have been invested in the generative-AI industry, and the companies—like their social-media forebears—will seek returns. In a blog post about “what we’re optimizing ChatGPT for” earlier this year, OpenAI wrote that it pays “attention to whether you return daily, weekly, or monthly, because that shows ChatGPT is useful enough to come back to.” This sounds quite a bit like the scale-at-all-costs mentality of any other social platform. As with their predecessors, we may not know everything about how chatbots are programmed, but we can see this much at least: They know how to lure and engage.
That Zuckerberg would be selling generative AI makes perfect sense. It is an isolating technology for an isolated time. His first products drove people apart, even as they promised to connect us. Now chatbots promise a solution. They seem to listen. They respond. The mind wants desperately to connect with a person—and fools itself into seeing one in a machine.
This article appears in the December 2025 print edition with the headline “Get a Real Friend.”