From Adi Robertson's 5-13-25 THE VERGE article entitled "AI Therapy Is a Surveillance Machine in a Police State":
Mark Zuckerberg wants you to be understood by the machine. The Meta CEO has recently been pitching a future where his AI tools give people something that “knows them well,” not just as pals, but as professional help. “For people who don’t have a person who’s a therapist,” he told Stratechery’s Ben Thompson, “I think everyone will have an AI.”
The jury is out on whether AI systems can make good therapists, but this future is already legible. Anecdotally, a lot of people are pouring their secrets out to chatbots, sometimes in dedicated therapy apps, but often on big general-purpose platforms like Meta AI, OpenAI’s ChatGPT, or xAI’s Grok. And unfortunately, this is starting to seem extraordinarily dangerous — for reasons that have little to do with what a chatbot is telling you, and everything to do with who else is peeking in.
This might sound paranoid, and it’s still hypothetical. It’s a truism that someone is always watching on the internet, but for many people the worst thing that comes of it is some unwanted targeted ads. Right now in the US, though, we’re watching the impending collision of two alarming trends. In one, tech executives are encouraging people to reveal ever more intimate details to AI tools, soliciting things users wouldn’t put on social media and may not even tell their closest friends. In the other, the government is obsessed with obtaining a nearly unprecedented level of surveillance and control over residents’ minds: their gender identities, their possible neurodivergence, their opinions on racism and genocide [...].
Even if you’re one of the few American citizens with truly nothing to hide in your public or private life, we’re not talking about an administration known for laser-guided accuracy here. Trump officials are notorious for governing through bizarrely blunt keyword searches that appear to confuse “transgenic” with “transgender” and assume someone named Green must do green energy. They reflexively double down on admitted mistakes. You’re one fly in a typewriter away from being somebody else.
In an ideal world, companies would resist indiscriminate data-sharing because it’s bad business. But they might suspect that many people will have no idea it’s happening, will believe facile claims about fighting terrorism and protecting children, or will have so much learned helplessness around privacy that they don’t care. The companies could assume people will conclude there’s no alternative, since competitors are likely doing the same thing.
To read the entire article, click HERE.