Leveraging the new data that OpenAI provided about ChatGPT users and mental health is a vital step forward for societal use of AI.
In today’s column, I closely examine a new set of data that OpenAI has released about the percentages associated with ChatGPT users experiencing a form of mental health distress or emergency during their interactions with the popular AI.
I have repeatedly urged AI makers to provide statistics regarding such weighty matters, enabling society to understand the nature and frequency of these occurrences. I call upon all the major AI makers to do so. Society is pretty much in the dark regarding population-level impacts. The popular LLMs tend to be proprietary; thus, there isn’t a straightforward way to fully gauge the extent of AI-related mental health encounters by users.
In the case of ChatGPT, OpenAI has previously noted that they have approximately 800 million weekly active users overall. By applying the newly released percentages of those detected as having a mental health consideration while using the LLM, we can explore a semblance of population-level impacts.
Let’s talk about it.
This analysis of AI breakthroughs is part of my ongoing Forbes column coverage on the latest in AI, including identifying and explaining various impactful AI complexities (see the link here).
AI And Mental Health Therapy
As a quick background, I’ve been extensively covering and analyzing a myriad of facets regarding the advent of modern-era AI that produces mental health advice and performs AI-driven therapy. This rising use of AI has principally been spurred by the evolving advances and widespread adoption of generative AI. For a quick summary of some of my posted columns on this evolving topic, see the link here, which briefly recaps about forty of the over one hundred column postings that I’ve made on the subject.
There is little doubt that this is a rapidly developing field and that there are tremendous upsides to be had, but at the same time, regrettably, hidden risks and outright gotchas come into these endeavors too. I frequently speak up about these pressing matters, including in an appearance last year on an episode of CBS’s 60 Minutes, see the link here.
Trying To Gauge Magnitude
OpenAI’s CEO, Sam Altman, stated in August that he believed fewer than 1% of ChatGPT users were having an “unhealthy” relationship with the AI. In my coverage at the link here, I noted that this seemed to suggest that 1% was considered an upper bound and that the count was acknowledged as being non-zero. Various commentaries in the media noted that the percentage appeared to be an ad hoc hunch rather than being based on quantifiable data.
Having an unhealthy relationship with AI is a somewhat nebulous phrase and could encompass a wide array of AI uses. I typically categorize those types of adverse human-AI relationships into six major groupings: (1) overdependence on AI, (2) social substitution with AI, (3) emotional over-attachment to AI, (4) compulsive usage of AI, (5) validation-seeking from AI, and (6) delusional identification with AI. For details on how AI can serve as a co-collaborator in guiding humans toward delusional thinking, see my discussion at the link here.
You might be aware that there is a rising concern that users of AI could fall into a form of psychosis, often informally labeled as AI psychosis. Since there isn’t yet a formal definition of AI psychosis, I have been using my drafted strawman definition for the time being:
- AI Psychosis (my definition) – “An adverse mental condition involving the development of distorted thoughts, beliefs, and potentially concomitant behaviors as a result of conversational engagement with AI such as generative AI and LLMs, often arising especially after prolonged and maladaptive discourse with AI. A person exhibiting this condition will typically have great difficulty in differentiating what is real from what is not real. One or more symptoms can be telltale clues of this malady and customarily involve a collective connected set.” For more details about this strawman, see the link here.
The above background sets the stage for the latest insights on these sobering matters.
OpenAI Provides New Info
In an online posting by OpenAI on October 27, 2025, entitled “Strengthening ChatGPT’s Responses In Sensitive Conversations,” these salient points were made (excerpts):
- “We recently updated ChatGPT’s default model to better recognize and support people in moments of distress.”
- “We’ve taught the model to better recognize distress, de-escalate conversations, and guide people toward professional care when appropriate. We’ve also expanded access to crisis hotlines, re-routed sensitive conversations originating from other models to safer models, and added gentle reminders to take breaks during long sessions.”
- “… our initial analysis estimates that around 0.07% of users active in a given week and 0.01% of messages indicate possible signs of mental health emergencies related to psychosis or mania.”
- “… our initial analysis estimates that around 0.15% of users active in a given week have conversations that include explicit indicators of potential suicidal planning or intent and 0.05% of messages contain explicit or implicit indicators of suicidal ideation or intent.”
- “… our initial analysis estimates that around 0.15% of users active in a given week and 0.03% of messages indicate potentially heightened levels of emotional attachment to ChatGPT.”
If you are directly interested in the topic of AI and mental health, you should consider reading the entirety of the OpenAI blog posting – I’m going to focus on selected aspects and don’t have the space here to cover the entire blog. No worries, since I will be covering other elements of the OpenAI blog in several upcoming postings.
You should also take a look at the updated system card for GPT-5, in which OpenAI indicates: “We are publishing a related blog post that gives more information about this work, and this addendum to the GPT-5 system card to share baseline safety evaluations. These evaluations compare the August 15 version of ChatGPT’s default model, also known as GPT-5 Instant, to the updated one launched October 3.” The document briefly depicts the latest adjustments and nuances associated with trying to put in place AI safeguards to detect mental health concerns.
Converting Percentages To Counts
I will gingerly use the cited percentages by multiplying them by the commonly reported statistic that there are 800 million weekly active users of ChatGPT. I will also use the three categories that were identified in the OpenAI blog and then add them together with caution.
- (1) Psychosis or mania (ChatGPT usage): 560,000 users based on 800,000,000 weekly active users x 0.07%.
- (2) Self-harm (ChatGPT usage): 1,200,000 users based on 800,000,000 weekly active users x 0.15%.
- (3) Emotional attachment (ChatGPT usage): 1,200,000 users based on 800,000,000 weekly active users x 0.15%.
- (4) All three categories added up (ChatGPT usage): 2,960,000 users based on adding together the 560K, 1.2M, and 1.2M of the above.
For the sake of discussion, let’s cautiously agree to add up the three counts, arriving at a total of 2,960,000 (that’s 560K + 1.2M + 1.2M), which could be rounded to 3 million people. This addition is a bit problematic because we don’t know that each such person was labeled in only one of the three categories. There is likely some overlap, and in that case, we would need to deduplicate the count accordingly.
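For readers who like to see the arithmetic spelled out, here is a minimal Python sketch of the calculation above. It assumes the 800 million weekly active user figure is exact and applies each percentage independently to that same user base; the category names are my own shorthand rather than OpenAI’s labels.

```python
# Back-of-the-envelope sketch of the ChatGPT counts listed above.
# Assumptions (mine, not OpenAI's): the 800 million weekly active user figure
# is treated as exact, and each percentage applies independently to that base.

WEEKLY_ACTIVE_USERS = 800_000_000  # commonly reported ChatGPT weekly active users

# Percentages of weekly active users, per OpenAI's October 27, 2025 blog post
rates = {
    "psychosis_or_mania": 0.0007,    # 0.07%
    "self_harm_indicators": 0.0015,  # 0.15%
    "emotional_attachment": 0.0015,  # 0.15%
}

counts = {name: round(WEEKLY_ACTIVE_USERS * rate) for name, rate in rates.items()}
for name, count in counts.items():
    print(f"{name}: {count:,}")
# psychosis_or_mania: 560,000
# self_harm_indicators: 1,200,000
# emotional_attachment: 1,200,000

# Naive sum that ignores any overlap among the categories (see caveat above)
print(f"total_without_deduplication: {sum(counts.values()):,}")
# total_without_deduplication: 2,960,000
```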
Doing The Same On Larger Scale
Before we start to analyze the calculated counts for ChatGPT usage, we can take a more macroscopic perspective and do a similar calculation across the board for the major LLMs.
Please know that estimates vary considerably about how many weekly active users there are across the likes of Anthropic Claude, Google Gemini, xAI Grok, Meta Llama, and so on. One popular estimate that floats around quite a bit is that there are 1.5 billion weekly active users across all of the major AI players, which includes ChatGPT. Personally, I think that’s a low count, and my guess is that the number is much bigger.
Anyway, let’s use the commonly floated 1.5 billion as a plug-in for the sake of discussion:
- (1) Psychosis or mania (all major AIs): 1,050,000 users based on 1,500,000,000 weekly active users x 0.07%.
- (2) Self-harm (all major AIs): 2,250,000 users based on 1,500,000,000 weekly active users x 0.15%.
- (3) Emotional attachment (all major AIs): 2,250,000 users based on 1,500,000,000 weekly active users x 0.15%.
- (4) All three categories added up (all major AIs): 5,550,000 users based on adding together the 1.05M, 2.25M, and 2.25M of the above.
From the macroscopic perspective, there might be around 5.5 million weekly active users of generative AI who are experiencing one of the three categories of mental health conditions. The issue of deduplication is once again a caveat. Indeed, the deduplication is not only with respect to the mental health categories; we would also need to deduplicate across the AIs, since a person might be using more than one AI.
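The same sketch can be extended to the 1.5 billion figure, along with a simple illustration of how an overlap assumption would shrink the naive total. To be clear, both the 1.5 billion user base and the 20% overlap rate used below are hypothetical plug-ins for discussion, not reported data.

```python
# Extending the sketch to a hypothetical 1.5 billion weekly active users across
# all major AIs, then illustrating how an overlap assumption shrinks the total.

ALL_AI_WEEKLY_USERS = 1_500_000_000  # hypothetical plug-in, not a reported figure

rates = {
    "psychosis_or_mania": 0.0007,    # 0.07%
    "self_harm_indicators": 0.0015,  # 0.15%
    "emotional_attachment": 0.0015,  # 0.15%
}

counts = {name: round(ALL_AI_WEEKLY_USERS * rate) for name, rate in rates.items()}
naive_total = sum(counts.values())
print(f"naive_total: {naive_total:,}")  # 5,550,000

# Hypothetical deduplication: suppose 20% of flagged users appear in more than
# one category or use more than one AI. The unique-person count would be lower.
ASSUMED_OVERLAP_RATE = 0.20  # assumption for illustration only
deduplicated_total = round(naive_total * (1 - ASSUMED_OVERLAP_RATE))
print(f"deduplicated_total: {deduplicated_total:,}")  # 4,440,000
```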
Bottom-Line On The Counts
Using a back-of-the-envelope approach, we might suggest that roughly 5.5 million people around the world are, in any given week, experiencing one of these mental health conditions while using AI, as detected by the AI.
ChatGPT would seem to account for the bulk of those instances, approximately 3 million people, though this is predicated on the assumption that 800 million of the 1.5 billion total users are ChatGPT users. We must also be cognizant that these are only the detected instances. The AI might be missing a sizable portion of users and be unable to sufficiently catch those who are experiencing mental health concerns. Another nuance is that we are assuming the percentages apply across the other AIs, and we are assuming that there aren’t more than just the three categories of mental health qualms.
Not wanting to pour fuel on that fire, but we should also be wondering about the timing underlying these figures. Here’s what I mean. You can inspect statistics on when people tend to go see a human therapist, and in doing so, there is often a time-based pattern involved. During certain times of the year, the numbers seem to rise. At other times of the year, the numbers seem to decline.
It could be that the 0.07% in the psychosis or mania category is based on a snapshot in time, and the same might be the case for the other reported percentages. If the time period selected or inspected is at a low ebb, the percentage is an undercount of what might later occur. We would certainly be further interested in whether the percentage is moving over time, perhaps increasing. Temporal tracking would be quite insightful and helpful.
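To see why detection gaps and timing matter, here is a small sensitivity sketch using the reported 0.15% self-harm indicator rate. The detection rates and seasonal multipliers below are entirely made-up values for illustration; the point is simply that the headline count swings considerably under plausible alternative assumptions.

```python
# Sensitivity sketch: how the headline count shifts if the AI misses some cases
# (detection rate below 100%) or if the snapshot period is a seasonal low or high.
# Every parameter below is a made-up value used purely for illustration.

WEEKLY_ACTIVE_USERS = 800_000_000
REPORTED_RATE = 0.0015  # the 0.15% self-harm indicator rate from OpenAI's post

for detection_rate in (1.0, 0.75, 0.5):          # fraction of true cases the AI catches
    for seasonal_multiplier in (0.8, 1.0, 1.2):  # low-ebb versus high-ebb periods
        implied_rate = REPORTED_RATE * seasonal_multiplier / detection_rate
        implied_count = round(WEEKLY_ACTIVE_USERS * implied_rate)
        print(f"detection={detection_rate:.0%}, season x{seasonal_multiplier}: "
              f"{implied_count:,} users")
# Ranges from 960,000 (perfect detection, seasonal low) up to 2,880,000
# (50% detection, seasonal high) for the same reported 0.15%.
```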
The point is that there are many layers of assumptions, and we must correspondingly be mindful not to draw over-the-top conclusions.
Big Problem Or Small Problem
If these numbers are anywhere near the true count, which maybe they are and maybe they aren’t, what can we make of them?
First, consider the 3 million weekly users of ChatGPT who fall into the three categories. Should we be worried about those people? Yes, of course. Each person is worth our attention. These people could be someone you know, such as a friend, a relative, or a partner, or they could be someone you don’t know (but that doesn’t matter). Our compassionate view is that each user deserves help regarding their mental health care.
Second, in the aggregate, can we get a handle on how big or small the number of 3 million people is? Let’s compare it to the populations of various US states. There are at least fifteen states with populations of fewer than three million people, such as New Mexico, Nebraska, Idaho, Maine, and Rhode Island. Thus, we are pondering the mental health status of a group of users the size of one of those states. I would suggest that ought to cause you to pause and think things over.
For the count of perhaps 5.5 million people across all major AIs, we are now reaching a size that exceeds the individual populations of roughly 30 states, encompassing states such as Alabama, Louisiana, Kentucky, Connecticut, and Utah. Again, that suggests this is an issue encompassing a relatively large number of people.
Making a comparison to state sizes is somewhat delicate since the counts are based on global usage. The 3 million and the 5.5 million are people using AI throughout the world. In any case, for purposes of visualizing the magnitudes, it is reasonable to mull over the size of state populations on a comparative basis.
Clearing Up The AI Role
An important distinction that needs to be pointed out is that we should not tumble into the mental trap of assuming that these people are necessarily encountering mental health issues due to the actions of the AI.
Do not conflate a presumed cause and effect.
For example, of the 1.2 million users of ChatGPT who expressed some form of self-harm intentions, we do not know whether the AI led them to that intention. It could be that some or perhaps many were seeking out AI after having already decided to go down that path. The key point is that the AI didn’t necessarily push them in that direction at the get-go. Some users might have turned to AI, or even searched the web, to find out about the self-harm topic, rather than having been stirred toward the matter by those online capabilities in the first place.
I’ve previously examined the question of AI as a driver of human behavior versus a collaborator in human behavior in these situations (see the link here). I am hoping that AI makers will either release the data associated with these percentages so that we can dig underneath the numbers or at least do the grunt work themselves and provide a more granular indication of how things look under the hood.
The Helping Side Of AI
I’ve got a few more twists for you that go beyond the surface-level assessment of these numbers.
A crucial question that ought to be raised is what the AI did once the detection of these users was computationally determined. You see, for the half-million users of ChatGPT who seemed to be experiencing psychosis or mania, did the AI talk them out of it, or did the AI hand off the conversation to a human therapist, or what transpired? How successful was this as a mental health intervention?
OpenAI had previously announced that they are setting up a curated network of therapists, providing a real-time, seamless means of connecting a user with a human therapist. I believe this is laudable and will become a kind of mental healthcare backstop that all AI makers will inevitably employ; see my discussion at the link here.
The last point for now is something that might raise some eyebrows. Here we go. Besides counting those who appear to be encountering a mental health issue, how many users were proactively aided by AI and improved their mental health?
If we are counting those who had a mental health qualm, we might want to look at the other side of the coin, too.
It is conceivable that the AI prevented some number of users from cycling down into a mental health abyss. The idea is that they came into using the AI and did not have a mental health issue at play, nor did the AI stir them into a mental health issue. Instead, the AI bolstered their mental health by giving sound advice or prudent guidance during their AI conversation.
I mention this surprising facet because it is vital to realize that the use of AI in a mental health context is a tradeoff. The AI can land on the dour side of the coin by causing or spurring mental health issues. In the same light, we need to give credit where credit is due, namely that AI can be a helpful 24×7 source of mental health advisement, even when someone wasn’t seeking mental health advice and wasn’t particularly in need of it.
Let’s see those upside percentages so we can get a fuller picture of what’s happening.
Keep The Data Flowing
The world is embarking upon a humongous experiment that is taking place on a wanton basis, and we are all guinea pigs, somewhat involuntarily participating. The experiment is that generative AI and LLMs can generate mental health advice, doing so at the touch of a button and whenever and wherever a person might be.
In the long run, will we be better off with this widespread access or worse off?
Now is the time to take the pulse of how AI is being used in a mental health context, along with shaping how it should be used. Collecting data, interpreting the data, and exploring statistics will be a fruitful means of figuring this out.
As the sage words often attributed to Albert Einstein note: “Not everything that can be counted counts, and not everything that counts can be counted.” In the case of AI, let’s count the right counts and use them wisely.


