
AI chatbots are giving out people’s real phone numbers

And in April, a PhD candidate at the University of Washington was messing around on Gemini and got it to cough up her colleague’s personal cell phone number. 

AI researchers and online privacy experts have long warned of the myriad dangers generative AI poses for personal privacy. These cases give us yet another scenario to worry about: generative AI exposing people’s real phone numbers. (The Redditor did not respond to multiple requests for comment and we could not independently verify his story.)

Experts say these privacy lapses are most likely due to the presence of personally identifiable information (PII) in training data, though the exact mechanism causing real phone numbers to surface in AI-generated responses is hard to pin down. Whatever the reason, the result is not fun for people on the receiving end—and, even more worryingly, there appears to be little that anyone can do to stop it. 

A 400% increase in AI-related privacy requests

It’s impossible to know how often people’s phone numbers are exposed by AI chatbots, but experts say they believe that it is happening far more than is reported publicly. 

DeleteMe, a company that helps customers remove their personal information from the internet, says customer queries about generative AI have increased by 400%—to a few thousand—over the last seven months. These queries “specifically reference ChatGPT, Claude, Gemini … or other generative AI tools,” says Rob Shavell, the company’s cofounder and CEO. Of these concerns, 55% reference ChatGPT, 20% Gemini, 15% Claude, and 10% other AI tools, Shavell says. (MIT Technology Review has a business subscription to DeleteMe.)

Shavell says customer complaints about personal information surfaced by LLMs usually take one of two forms. In one common situation, “a customer asks a chatbot something innocuous about themselves and gets back accurate home addresses, phone numbers, family members’ names, or employer details.” In the other, a customer encounters and reports the exposure of someone else’s personal data, when “the chatbot generates plausible-but-wrong contact information.” 

This aligns with what happened to Daniel Abraham, a 28-year-old software engineer in Israel. In mid-March, he says, a stranger sent him a “weird WhatsApp message from an unknown number” asking for help with his account in PayBox, an Israeli payment app. 

“I thought it was a spam message,” he wrote to MIT Technology Review in an email—“someone who was trying to troll me.”
