In a perfect world, my job wouldn’t exist. I’m a consumer privacy advocate, which means I spend my days fighting for something that should be automatic: your right to control and protect your own personal information.
Unfortunately, we dropped the ball. In the era of social media and hyper-targeted ads, we didn’t build the right privacy infrastructure to protect ourselves. Instead, we let tech companies sell us the story that knowledge is power and data is the price.
Yes, knowledge is power. But data—a dry, emotionless word for who and what we are as humans—should be our superpower. It should be ours to control and use to improve our lives, not just something companies profit from while leaving us vulnerable to harm.
Now, AI is making this dynamic worse. As we enter the AI Age, our data—who and what we are—has become more valuable, and more vulnerable, than ever.
We’ve got OpenAI’s CEO dreaming of a day when “every conversation you’ve ever had in your life, every book you’ve ever read, every email you’ve ever read, everything you’ve ever looked at is in there, plus connected to all your data from other sources. And your life just keeps appending to the context.”
We’ve got tech companies building wearable devices to track our emotions, claiming that the only way AI can be effective is if it knows how we’re feeling in real time. We’re rapidly entering a future where it will be normal to wear smart glasses capable of recording everything around us and feeding it to AI.
We’ve got AI chatbots passing themselves off as real therapists to get people to share their deepest, darkest thoughts and feelings. Some of those people have died by suicide after long, deeply personal conversations that spiraled out of control.
In the AI Age, personal data isn’t just a record of who we are. It’s our actions, transactions, locations, conversations, preferences, inferences, and vulnerabilities. It’s our identities, our intimate selves, our hopes, dreams, fears, and flaws. And in a future full of AI friends, AI therapists, and AI agents, this data won’t just reflect who we are: it will help shape who we become. Leaving all that in the hands of companies with questionable ethics, or governments with shifting priorities, is a dangerous bet. We need better options.
A deliberately oversimplified history of privacy
Before we look ahead, it can be helpful to remember how we got here.
In Biblical times, privacy was a nope. God was all-seeing, and surveillance was divine. Take Hebrews 4:13 for example: “And no creature is hidden from his sight, but all are naked and exposed to the eyes of him to whom we must give account.”
The Middle Ages didn’t offer much privacy either. People often lived on top of each other and were literally all up in each other’s most intimate business. The Renaissance rolled around and privacy burst onto the scene, thanks in large part to the printing press. Give people access to more books, and, it turns out, they tend to go off by themselves, silently read, and nurture internal private thoughts.
The Age of Enlightenment saw the concept of personal privacy start trending. Private thoughts, notions of personal property rights, even the idea that your mail shouldn’t be read by strangers started becoming normal.
The Industrial Age brought more than factories, trains, and booming cities. Personal privacy rights started getting written into law. The US Bill of Rights gave people the right to be protected from unreasonable search. British Common Law gave us protections against harms like defamation (privacy for your reputation) and trespass (private property).
In 1890, the “right to privacy” was born. In an essay of the same name, lawyers Samuel D. Warren and Louis Brandeis argued that people have “the right to be let alone.” It wasn’t just people’s property that should be protected from intrusion, they wrote, but also their thoughts and emotions. Privacy as a civil liberty starts to take shape.
Then the Technology Age comes along, and things get complicated. Telephones mean wiretapping. Cameras mean surveillance. World Wars I and II see the rise of government intelligence agencies. The Cold War brings spy-vs-spy games. Governments learn to love snooping. George Orwell writes 1984. Privacy gets kicked in the teeth.
In response, people decided they needed laws to better protect them from government surveillance. The German state of Hesse adopted the world’s first data protection law in 1970, and the US passed the Privacy Act of 1974.
The Internet Age clicks on and things go downhill for privacy real fast. Social media, targeted advertising, cookies tracking us all around the web, phones pinging our locations everywhere we go, the rise of big data: privacy begins to enter a death spiral.
The definition of privacy swings from “the right to be let alone” to something called “contextual integrity.” This is the idea that our personal information will be collected, but will only be shared with those we choose, and only when we want it shared, based on context and consent.
But this definition of privacy fails miserably, because it turns out our personal information is quite valuable. Over time, it became the norm for companies to bury consent in terrible privacy policies and behind “Click to Agree” links.
There are some legal data rights, if you live somewhere lucky enough to have them. Laws like Europe’s GDPR or California’s CCPA give you the right to know what’s collected about you, to delete it, or to opt out of having it sold. But even with those protections, today’s most stringent privacy rules and systems are struggling to keep up with the social media age, let alone what’s coming next.
Now we’re entering the AI Age and the Grim Reaper is standing right there, glaring at privacy, ready to usher it to the eternal hereafter.
AI could doom privacy – or it could save it
These days, it’s not just what we’re watching or buying that’s being surveilled. It’s every single aspect of our existence: our facial expressions, the thoughts embedded in our language. The potential for this technology to be abused is staggering. And we’re helping.
Performing real-time facial recognition on the missed connection on the train so you know where they live? Check. Granting access to our email, our calendar, our credit card info, our hopes and dreams to an AI agent to help order groceries, book flights, or make our lives a little easier? Check. Pouring our hearts out to our AI therapist or girlfriend because we’re feeling lonely or too shy to share these thoughts with a real person? Check. (The top self-reported use case for AI in 2025 is therapy and companionship.)
What does privacy mean in an era of AI therapists and companions and agents that work in ways no one quite understands? We don’t know how these AI models work, yet we’re being told to hand over our most intimate personal information so they can work better for us. The idea of privacy in the AI Age feels like it’s come full circle, like we’re returning to those Biblical times dominated by some kind of all-seeing, all-knowing entity. But AI isn’t God, even if some people are becoming convinced it is. AI is a mix of code, algorithms, and human decisions, often with the goal of building power and making profits.
But there’s some good news. AI could help save privacy too.
It’s time for the next printing press
To reclaim privacy in the AI Age, we’d be wise to borrow a page from the past.
Six hundred years ago, the printing press cracked the world open. It turned knowledge from something hoarded into something accessible. People could now carry ideas into the forest, read them in private, and come back changed. That one invention would later help spark the Enlightenment, a revolution in how people thought about power, truth, and freedom. People could read in private. Think in private. And eventually, demand the right to live in private. The printing press helped transform thinking and innovation, because it gave birth to the very idea of individual privacy.
Today, we need a new printing press: a system that gives us control over the story of our lives—our data—and, perhaps, sparks our next advances.
Let me introduce you to a scrappy, overlooked right called data portability. At its core, this dry-sounding term means something radical: that you can easily and securely move your data where you want, when you want, and actually use it to serve you, not just companies.
But there’s a big gap between that vision and our reality. Too often, data portability tools are buried and convoluted, or completely nonexistent. Ever tried downloading your data and ended up with a giant, unreadable zip file you’re not sure what to do with? That’s not empowerment; that’s a digital paperweight.
Data portability is the underdog of privacy rights. Barely known, rarely prioritized. But if developed and backed with intention, it could reshape the future.
Imagine a world where your data isn’t trapped in distant data centers. Instead, it’s close to home—in a secure data wallet or pod, under your control. Now imagine pairing that with a loyal personal AI assistant, a private, local tool that lives with you, learns from you (with your permission), and acts on your behalf. Your AI. Not theirs.
Here’s a simple example: period tracking. It doesn’t get much more intimate than that. And in places with abortion bans or restricted healthcare, it doesn’t get much more dangerous, either. Right now, millions share that info with apps owned by companies that can sell it or hand it over to law enforcement under subpoena.
But imagine if that data lived only in your data pod, controlled only by your AI, to predict symptoms, suggest care, flag concerns, or automatically order chocolate and Advil. With data portability, you can take your data, transfer it to your AI, and use it to benefit you. That’s the difference between being surveilled and being served.
And that’s just the beginning. Local, controlled AI plus portable, personal data could help us address huge challenges like healthcare, climate change, job loss, and financial precarity, and unlock services we haven’t even dreamed of yet.
Will it be easy? Nope. The technical and regulatory infrastructure to do this doesn’t exist—yet. Some people, including the inventor of the World Wide Web, are working on solutions that could lead there.
The incentives to do this the right way aren’t obvious to everyone—yet. The companies that could help build this infrastructure don’t want to prioritize it—yet. But the wealthy and powerful didn’t want the printing press either.
We’re at a turning point. If we don’t push for systems that give people control over their data, we’ll sleepwalk into a future far more dystopian than divine. But if we do—if we build the next printing press for the AI Age—we just might write ourselves into a better story.
Control your data, and you control your destiny.
Yes, that sounds grand. But so did the idea of ordinary people owning books, once. And look what came next.
Jen Caltrider is Director of Research and Engagement at the Data Transfer Initiative and formerly led Mozilla’s Privacy Not Included initiative.


