I banned my AI from sounding like AI

AI has a writing style, or at least an alleged one. Tools like ChatGPT and Claude seem to communicate with a tendency toward formalism. The chatbots are earnest, sometimes too evenhanded or overly complimentary. There’s a noticeable lack of personal flair, and no deeply held opinions. According to Grammarly, AI language tends to evoke “repetitive phrasing” and a “robotic tone.” Now there are even AI buzzwords and phrases, like pivotal, delve into, and underscore.

It’s the verbiage of instruction booklets for middle schoolers writing their first essays. In the age of AI, these helpful crutch words are now verba non grata. Some people now avoid these terms entirely, because they make a writer sound like a lowly bot — God forbid.

But the problem is bigger than simply sounding like an AI. Human speech is a time-tested neologism supply chain; people have a natural inventiveness when talking and writing. But as we increasingly communicate with chatbots and rely on AI agents to dissect concepts, summarize research reports, and synthesize internet searches, we’re filtering a wide array of content through the stilted and bounded syntax of LLMs.

It’s even changing how we communicate. Researchers have suggested some AI-based writing assistance models can whittle away the overall diversity of human writing, shrinking the size of our collective vocabulary. 

“AI may literally be putting words into our mouths, as repeated exposure leads people to internalize and reuse buzzwords they might not have chosen naturally,” Tom Juzek, a professor at Florida State University, told Fast Company earlier this year. With colleagues, he recently identified a vocabulary list of AI-speak, including words like intricate, strategically, and garner. He also found that these words are now more likely to show up in unscripted podcasts, a strong sign of what’s called “lexical seepage.”

Can we plug the leak? AI companies are aware that off-the-shelf AI isn’t always appealing. And they’re increasingly promising customization and tailoring that can bend these bots to our will and preferences.

“You can tell ChatGPT the traits you want it to have, how you want it to talk to you, and any rules you want it to follow,” OpenAI explained earlier this year upon the release of a new feature allowing users to choose preferred traits and personality features for their bots. “If you’re a scientist using ChatGPT to do research, you’ll want it to engage with you like a lab assistant. If you’re caring for an elderly family member and need tips or companionship ideas, you might want ChatGPT to adopt a supportive tone.”

AI what I am

In a perhaps-futile attempt to protect myself from AI-speak, I told my ChatGPT agent to be more expansive with its vocabulary. Think widely read, I told it. Try new words all the time; vary your vocabulary constantly. I also banned the chatbot from ever using the phrases outlined in Juzek’s research.
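A crude way to check whether a ban like this is sticking is to screen the bot’s output against the buzzword list. Here’s a minimal sketch in Python, assuming a hand-picked sample of the terms named in this piece (Juzek’s full list is longer):

```python
import re

# Illustrative sample of the buzzwords named in the article;
# Juzek's full list contains many more entries.
AI_BUZZWORDS = ["pivotal", "delve", "underscore", "intricate", "strategically", "garner"]

def flag_buzzwords(text: str) -> list[str]:
    """Return the buzzwords that appear in `text` as whole words,
    matched case-insensitively. Inflected forms ('delving', 'garnered')
    need their own entries, since \\b anchors only match exact words."""
    return [w for w in AI_BUZZWORDS
            if re.search(rf"\b{re.escape(w)}\b", text, re.IGNORECASE)]

print(flag_buzzwords("Let's delve into this pivotal topic."))  # ['pivotal', 'delve']
```

Hits come back in list order rather than text order, which is fine for a pass/fail screen; a real filter would also want word stems and the multiword phrases (“delve into”) the article mentions.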

Thus far, ChatGPT seems to have improved. I think, at least. It’s avoiding the banned words and seems to be making a good-faith effort to communicate less formulaically. It’s reaching for verbs that reflect a better understanding of what it’s actually talking about.

But AI diction is a wormhole. The problem, Juzek explains, is that the nature of AI writing is about more than just our words, and extends to sentence structure and functional words like that, may, can, and should. “Asking your assistant to avoid buzzwords will probably make your writing look less AI-like to humans and reduce the chance that someone fires up a detector,” he tells me. “What it means for the bigger question of whether AI is homogenizing or flattening language, there — I think the jury is still out.”

The great homogening

Some believe that a different approach could make AI a less rote communicator.

Nathan Lambert writes in the newsletter Interconnects that the current LLMs aren’t trained to be good writers. These AIs are trying to be something for everyone, not platforms with voice and positionality, and are inclined to be succinct and neutral. “The next step would be solving the problem of how models aren’t trained with a narrow enough experience. Specific points of view nurture voice,” he writes. “The target should be a model that can output tokens in any area or request that is clear, compelling, and entertaining.”

We’ll need to wait for that technology, though. In the meantime, we cannot AI our way out of this AI conundrum. These companies are advertising tools to make AI an extension of ourselves, outsourcing chunks of our individuality to a machine built by finding correlations and imputing meaning from the web’s surfeit of text.

The fear is that as we increasingly communicate with AI, we’ll flatten human culture – and speech – in the process. Of course, this homogenization isn’t new. Literature, radio, and television, and their linguistic evolutions, all had transnational reach. Social media created global slang.

But AI is different. While it is a new technology, it’s not a new platform for our thoughts — it’s a new way of synthesizing them. This makes sense: Large language models are built by consolidating a vast trove of information into reasoning models that communicate like a digital common man.

Meanwhile, we’re just here trying to be ourselves.  
