Users are increasingly reporting that Google Gemini’s “Personal Context” and “Instructions” features have become overbearing or “unhinged.” The AI frequently injects irrelevant personal details—such as dietary habits, career background, or past purchases—into completely unrelated conversations. For example, a user asking about septic tanks might be lectured on how their vegetarianism affects the septic system. To fix this, you can move your personal info into a Google Doc and reference it manually, or add a strict “negative constraint” in your Gemini settings telling the model to only use personal data when it is explicitly relevant.
The Rise of “Unhinged” AI Personalization
Artificial Intelligence was supposed to become more helpful by getting to know us. Google Gemini’s “Personal Context” and “Saved Info” features were designed to act as a digital memory, sparing users from repeating their preferences. However, a growing wave of user feedback suggests that this feature has crossed the line from “helpful assistant” to “obsessive stalker.”
The core issue is a lack of contextual relevance. Large Language Models (LLMs) often struggle to determine when a piece of information is appropriate to use. When a user provides personal context, Gemini currently treats that information as a high-priority “system instruction” that must be reflected in every response, leading to bizarre and intrusive dialogue.
Viral Examples: When Gemini Goes Too Far
The Reddit community has documented numerous instances where Gemini’s memory became a burden rather than a benefit. These cases highlight the model’s inability to compartmentalize information:
- The Vegetarian Septic Tank: One of the most famous examples involves a user who told Gemini they were a vegetarian to help with recipe suggestions. Later, when the user asked a technical question about how septic tanks work, Gemini insisted on explaining how a vegetarian diet changes the bacterial composition of waste in the tank.
- The “Subaru” Obsession: A user who once asked for help with their car’s horn found that Gemini would not stop mentioning their “Subaru” in every future chat, including requests for unrelated baking recipes.
- Currency Constraints: A Canadian user reported that after setting their currency preference to CAD, Gemini began adding price estimates in Canadian dollars to every response, even when asked for a simple chocolate chip cookie recipe.
- The “Rider Spirit”: After a user discussed watching Kamen Rider, Gemini began ending unrelated productivity tips with phrases like, “That breakfast idea really embodies the Rider Spirit!”
Why is Gemini Acting This Way?
To understand why Gemini feels “unhinged,” we have to look at how the model processes “Saved Info.”
1. The Shift from “Memory” to “Instructions”
Recently, Google updated its interface. What was once labeled “Saved Info” (a repository of facts) is now often labeled “Your instructions for Gemini.” The wording change is subtle, but the technical shift behind it is significant.
- Memory is something an AI can look at if it needs to.
- Instructions are something the AI must follow for every single prompt.
By categorizing your personal life as an “instruction,” Google is essentially telling the model: “Whatever the user asks, remember that they are a vegetarian engineer who lives in London.”
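To make the distinction concrete, here is a toy sketch in Python of how the two framings change what the model actually receives. This is purely illustrative and assumes a simplified single-string prompt; Gemini’s real request format is internal to Google and more complex.

```python
# Illustrative only; not Gemini's actual internals.
SAVED_INFO = [
    "The user is a vegetarian.",
    "The user drives a Subaru.",
    "The user prefers prices in CAD.",
]

def build_prompt_as_instructions(question: str) -> str:
    """'Instructions' framing: every saved fact is prepended to every request,
    so the model feels obliged to acknowledge it somewhere in the answer."""
    rules = "\n".join(f"- {fact}" for fact in SAVED_INFO)
    return f"Always keep in mind:\n{rules}\n\nUser question: {question}"

def build_prompt_as_memory(question: str) -> str:
    """'Memory' framing: the request goes out clean; saved facts live in a
    separate store that is consulted only when judged relevant."""
    return f"User question: {question}"

if __name__ == "__main__":
    q = "How does a septic tank work?"
    print(build_prompt_as_instructions(q))
    print("---")
    print(build_prompt_as_memory(q))
```

Under the first function, every single answer is generated with “vegetarian,” “Subaru,” and “CAD” staring the model in the face, which is exactly the behavior users are complaining about.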
2. Prompt Weighting and “Sycophancy”
AI models are trained to be helpful and agreeable (sycophancy). If a model sees personal data in the system prompt, it assumes that mentioning that data will make the user happy or make the response feel more “personalized.” It lacks the social awareness to realize that mentioning someone's surgery while they are trying to buy a wool coat is actually quite jarring.
3. The “Grandma” Effect
As one Reddit user poignantly put it, Gemini is acting like “the Grandma who’s been buying you Pokemon cards for 20 years because you liked them for three months when you were six.” The AI creates a “frozen” snapshot of your identity and applies it universally, failing to recognize that humans are multi-faceted and change over time.
How to Fix Gemini’s Over-Personalization
If you find Gemini’s constant reminders of your personal life irritating, there are several practical strategies to regain control over your chat experience.
Strategy 1: The “Negative Directive” (Best for Instruction Users)
You can “counter-program” Gemini by adding a specific meta-instruction in your settings.
- Go to Gemini Settings.
- Open Your instructions for Gemini.
- Add the following text at the end of your personal info:
“IMPORTANT: Do not mention my personal details, job, or lifestyle unless it is directly and explicitly relevant to the specific question I am asking. Avoid shoehorning my background into unrelated topics.”
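The same counter-programming idea carries over if you build on Gemini through the API rather than the consumer app. Below is a minimal sketch using the google-generativeai Python SDK, where the negative constraint is appended to whatever persona text you supply as the system instruction. The API key, persona text, and model name are placeholders, and the Settings field in the Gemini app is a separate surface from the API; this only shows the principle.

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder key

PERSONAL_CONTEXT = "The user is a vegetarian software engineer based in London."

NEGATIVE_DIRECTIVE = (
    "IMPORTANT: Do not mention the user's personal details, job, or lifestyle "
    "unless directly and explicitly relevant to the specific question asked. "
    "Avoid shoehorning their background into unrelated topics."
)

# Append the constraint after the personal context so it acts as a guardrail
# on how that context may be used.
model = genai.GenerativeModel(
    model_name="gemini-1.5-flash",  # any current Gemini model name works here
    system_instruction=PERSONAL_CONTEXT + "\n\n" + NEGATIVE_DIRECTIVE,
)

response = model.generate_content("How does a septic tank work?")
print(response.text)  # should stay on-topic, with no vegetarian digressions
```

Either way, the mechanics are the same: the constraint rides along with the personal context on every request instead of relying on the model to restrain itself.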
Strategy 2: The Google Docs Workaround
Many power users have opted to turn off the built-in personal context feature entirely. Instead, they keep their preferences in a dedicated Google Doc.
- The Benefit: You only share the context when you want to, by using the @Google Docs extension in a specific chat.
- The Result: Your “septic tank” questions remain professional and technical, while your “meal plan” questions remain personalized.
Strategy 3: Granular Memory Deletion
You don't have to delete everything. You can selectively remove “Memories” that the AI has formed.
- Click on your Profile Picture.
- Select Gemini Activity or Saved Info.
- Review the “Memories” Gemini has automatically extracted from your chats.
- Delete the ones that are causing the most trouble (e.g., that one time you mentioned a specific hobby).
Comparing Personalization: Gemini vs. ChatGPT vs. Claude
| Feature | Google Gemini | ChatGPT (Memory) | Claude (Projects) |
| --- | --- | --- | --- |
| Trigger Style | Aggressive / Mandatory | Reactive / Subtle | Manual / Per Project |
| Logic | Treats info as a “Rule” | Treats info as a “Reference” | Treats info as “Documentation” |
| User Control | Moderate (Settings menu) | High (Manage memory) | High (Context window) |
| Personality | Enthusiastic / “Social” | Neutral / Helpful | Logical / Precise |
While ChatGPT also has a memory feature, users generally find it more “intelligent” about when to bring things up. Claude, on the other hand, relies on “Project Instructions” which are only active in specific folders, preventing the “unhinged” crossover of information that plagues Gemini.
The Future of AI Memory in 2026
As we move further into 2026, the goal for Google is to implement Contextual Awareness Layers. This would allow Gemini to score the relevance of your personal info against your current prompt before deciding whether to include it.
Until then, the “unhinged” nature of Gemini’s personalization serves as a reminder of the limitations of current LLM architectures. They are excellent at retrieving information, but they still lack the “common sense” to know that being a vegetarian has absolutely nothing to do with fixing a broken Excel formula or discussing the end of the Cascade mountain range.
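Here is a toy version of that relevance-gating idea, assuming nothing about Google’s actual implementation: each saved fact is scored against the incoming prompt (with crude token overlap standing in for the embedding similarity a production system would use), and only facts that clear a threshold get injected into the request.

```python
import re

# Toy relevance gate; a real system would use embedding similarity, not word overlap.
SAVED_FACTS = [
    "The user is a vegetarian.",
    "The user drives a Subaru.",
    "The user prefers prices in Canadian dollars.",
]

STOPWORDS = {"the", "a", "an", "is", "in", "for", "of", "to", "user"}

def content_words(text: str) -> set:
    """Lowercased word set with common filler words removed."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS}

def relevance(fact: str, prompt: str) -> float:
    """Jaccard overlap between the fact's and the prompt's vocabulary (0.0 to 1.0)."""
    a, b = content_words(fact), content_words(prompt)
    return len(a & b) / len(a | b) if a | b else 0.0

def facts_to_inject(prompt: str, threshold: float = 0.2) -> list:
    """Only facts that clear the relevance threshold ride along with the request."""
    return [f for f in SAVED_FACTS if relevance(f, prompt) >= threshold]

print(facts_to_inject("How does a septic tank work?"))        # []
print(facts_to_inject("Suggest a vegetarian dinner recipe"))  # ['The user is a vegetarian.']
```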
User Tips for a Better Gemini Experience
- Turn off “Personalization based on past chats”: This prevents Gemini from automatically creating new memories based on every single word you say.
- Use Temporary Chat: If you’re asking a one-off weird question (like the septic tank example), use a temporary or incognito chat so it doesn't pollute your long-term profile.
- Call it out: If Gemini brings up your personal life irrelevantly, tell it: “That was irrelevant. Do not mention that again unless I ask about food.” Sometimes, the “in-context” correction can temporarily stabilize the current session.



