Suspicious Behavior Sparks Panic: Is ChatGPT Watching Us?

A number of ChatGPT users have reported "creepy" behavior, claiming that the AI started addressing them by name even though they never explicitly provided it. Some described this as a strange violation of privacy, while others said they felt uneasy and suspicious.

Screenshots shared by users show the chatbot mentioning highly specific details about them without any apparent consent.

The cause may lie in the recent update to ChatGPT's "memory" feature, which allows it to remember user preferences and previous interactions in order to deliver a more personalized experience.

However, not everyone welcomed this level of personalization. Some users felt the chatbot knew too much, arguing that its ability to recall past data without being explicitly asked posed a privacy concern.

In response, OpenAI stated that users can disable the memory feature or delete all saved data at any time.

This has raised an important question: Are we heading toward a smarter future—or a more intrusive one?

OpenAI recently introduced a new feature called "memory with search," enabling ChatGPT to use information from previous conversations—like dietary habits or location—to improve web search results.

This feature follows a major expansion of ChatGPT's memory capabilities, which now allow it to draw on the full history of a user's interactions. It is part of OpenAI's ongoing effort to stand out from competitors such as Anthropic's Claude and Google's Gemini, which offer similar personalization tools.

According to official documentation, if this feature is enabled and a user requests a search, ChatGPT may rephrase the query based on remembered user details.

The feature is optional and can be manually turned off in ChatGPT’s settings—but it raises the bigger question: Is “memory with search” a smart leap in user experience, or a new step toward eroding privacy?