OpenAI, which ignited the boom, continues to set the pace in many ways. Usage of ChatGPT more than doubled, to 10% of the world’s population. “That leaves at least 90% to go,” says Nick Turley, head of ChatGPT.
A large language model (LLM), the technology underpinning chatbots like ChatGPT and Anthropic's Claude, is a type of neural network, a program that learns its behavior from data rather than following rules written out by engineers. By feeding it reams of text, engineers train the models to spot patterns and predict which "tokens," or fragments of words, should come next in a given sequence. From there, AI companies use reinforcement learning—strengthening the neural pathways that lead to desired responses—to turn a simple word predictor into something more like a digital assistant with a finely tuned personality.
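The idea of next-token prediction can be illustrated with a deliberately tiny sketch. This bigram counter is orders of magnitude simpler than a real neural network (it counts word pairs rather than learning patterns), but it shows the same basic move: given a sequence, predict the most likely next token from what the training data contained. The corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus; real models train on vastly more text and on sub-word tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which token follows each token: the simplest possible
# "next-token predictor."
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(token):
    """Return the token most often observed after `token`."""
    counts = follows[token]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → "cat" ("cat" follows "the" twice; "mat" and "fish" once each)
```

An LLM does the analogous thing with billions of learned parameters instead of a lookup table, which is what lets it generalize to sequences it has never seen.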
About a year ago, OpenAI researchers hit on a new way of improving these models. Instead of letting them respond to queries immediately, the researchers allowed the models to run for a period of time and “reason” in natural language about their answers. This required more computing power but produced better results. Suddenly a market boomed for mathematicians, physicists, coders, chemists, lawyers, and others to create specialized data, which companies used to reinforce their AI models’ reasoning. The chatbots got smarter.


