I turned off ChatGPT Memory on day one and never turned it back on.
That is not a rejection of personalization. It is a refusal to outsource my long‑term context to a black box. The moment a model can “remember me,” the model also starts shaping me. That subtle loop is why I keep my context explicit and external.
My thesis is simple: memory should be portable, inspectable, and reversible. Platform memory is none of those.
Memory turns context into an invisible product
There is a difference between “context” and “memory.” Context is what I choose to provide, in the moment, for a specific task. Memory is what the system retains across tasks without me explicitly asking for it. That makes memory feel convenient, but it changes who owns the interface.
When memory is automatic, I cannot tell which part of the output is me and which part is a hidden profile. I cannot review the feature store. I cannot diff what changed. I cannot reset with confidence. The prompt becomes a moving target, and I stop being able to reason about why the model behaves the way it does.
For me, that is a dealbreaker. I want my tools to be legible. I want to understand what inputs drove the outcome. Memory that I cannot see breaks that mental model.
The soft problem: information bubbles by design
A memory system will always optimize for relevance. Relevance is not neutral. It is a policy choice that reinforces patterns. The more a model “knows” about me, the easier it becomes to filter and smooth the information it offers.
That is fine for music recommendations. It is dangerous for thinking. I do not want my assistant to become a comfort machine that only mirrors my established preferences. I want it to surprise me, to bring in adjacent ideas, and to argue with me when I am wrong.
If the model learns that I like a certain style or worldview, it will likely lean into it. That creates a polite feedback loop: more of what I already like, less of what I do not. The result is a mild, friendly echo chamber. It is not obvious in a single response, but it accumulates.
I would rather keep the tool blunt and let myself do the curation.
The hard problem: platform lock‑in disguised as convenience
Memory feels like a personal notebook, but it is not. It is a proprietary layer tied to one vendor’s policies and models. If I migrate platforms, my memory cannot follow. If the vendor changes the rules, my memory changes with it. If I want to audit what is stored, I cannot.
That lock‑in is subtle because the feature is framed as a gift. But the cost appears later, when your workflows rely on it and you cannot leave without losing your “self.”
I refuse to put my long‑term preferences in a box I cannot export. If I want a personal knowledge base, I will build it in plaintext files, on my machine, under version control.
Memory hides the most important debugging surface
I build agents. When they go wrong, I need to know why. That means tracking the exact context that was provided. With memory, there is always an invisible diff between yesterday’s context and today’s.
That makes debugging nearly impossible:
- Did the model misunderstand me today, or did it learn something yesterday?
- Did a response change because the model updated, or because the memory drifted?
- If I want to reproduce a result, which version of memory do I need?
I have learned to treat prompts like programs. Programs only work if the inputs are deterministic. Hidden memory breaks determinism. So I do not use it.
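One way to make that determinism concrete is to fingerprint the exact context before each run. This is a minimal sketch, not a tool I reference in this post; the file names and helper are illustrative, assuming a setup where context lives in plain files.

```python
import hashlib
from pathlib import Path

# Illustrative context files; any explicit, versioned layout works.
CONTEXT_FILES = ["PROFILE.md", "CONTEXT.md", "NOTES.md"]

def context_fingerprint(paths):
    """Hash the exact bytes of every context file, in a fixed order,
    so a model run can be tied to one reproducible input state."""
    h = hashlib.sha256()
    for name in sorted(paths):
        p = Path(name)
        h.update(name.encode())
        h.update(p.read_bytes() if p.exists() else b"<missing>")
    return h.hexdigest()[:12]

# Log this alongside each model response; if two runs disagree,
# compare fingerprints first.
print(context_fingerprint(CONTEXT_FILES))
```

If the fingerprint matches and the output still changed, the model changed, not my context. Hidden memory makes that distinction impossible.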
I still want personalization, just not the hidden kind
This is not a purity argument. I do want tools to adapt to me. I just want that adaptation to be explicit and editable.
My version of memory looks like this:
- A PROFILE.md file with my writing voice, preferences, and do‑not‑do rules.
- A CONTEXT.md file for a specific project with the current thesis and constraints.
- A NOTES.md file for things I want to remember across sessions.
These files live in a repo. I can version them. I can share them across tools. I can cut a release, or roll back a bad change. That gives me the benefits of memory without the opacity.
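Turning those files into a prompt is a one-liner away. A sketch of the idea, assuming the file names above; the helper name is hypothetical, not part of any tool:

```python
from pathlib import Path

# The same illustrative files from the list above.
MEMORY_FILES = ["PROFILE.md", "CONTEXT.md", "NOTES.md"]

def build_preamble(paths):
    """Concatenate the memory files into one explicit preamble that can
    be pasted at the top of any chat, in any tool. Missing files are
    simply skipped."""
    parts = []
    for name in paths:
        p = Path(name)
        if p.exists():
            parts.append(f"## {name}\n{p.read_text().strip()}")
    return "\n\n".join(parts)

print(build_preamble(MEMORY_FILES))
```

Because the preamble is built from files I own, I can diff yesterday's version against today's, which is exactly what platform memory refuses to let me do.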
The control loop matters more than the convenience
If memory saves me a few sentences of typing, that is nice. But the cost is a loss of control. I cannot always tell when memory is helping versus when it is quietly steering me.
I care about the control loop because I use AI as a thinking partner, not just a text generator. If the partner keeps a private profile of me, I do not know who I am actually talking to.
A good assistant should be predictable and accountable. It should behave as if it has no memory unless I explicitly hand it one. That is the contract I want.
When memory could be useful (and why I still skip it)
There are real use cases for persistent memory:
- Customer support agents that should remember user preferences.
- Long‑running coaching relationships where continuity matters.
- Accessibility tools where remembering preferences is helpful and humane.
I respect those cases. I also think they require stronger transparency than we have today. If memory comes with an audit log, export, and deletion that actually works, I will reconsider. But today it feels like a one‑way door.
My default is simple: no hidden state.
My current workflow instead
Here is the practical alternative I use:
- I start each task with a short prompt that declares the goal and the constraints.
- I keep project memory in versioned files and paste the relevant parts in.
- I maintain a “working set” of notes so I can switch tools without losing myself.
- I periodically prune these files so the context stays sharp.
This sounds slower, but it is faster in the long run. My context is portable. My results are reproducible. I can prove what the model saw when it answered.
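The pruning step can even be nudged by a script. A rough sketch under loud assumptions: the four-characters-per-token estimate is crude, and the budget is an arbitrary example number, not a recommendation.

```python
from pathlib import Path

# Rough heuristic: ~4 characters per token. Budget is an example value.
CHARS_PER_TOKEN = 4
TOKEN_BUDGET = 2000

def over_budget(path):
    """Flag a context file that has grown past a rough token budget,
    as a reminder to prune it. Missing files are not flagged."""
    p = Path(path)
    if not p.exists():
        return False
    return len(p.read_text()) / CHARS_PER_TOKEN > TOKEN_BUDGET

for name in ["PROFILE.md", "CONTEXT.md", "NOTES.md"]:
    if over_budget(name):
        print(f"{name}: consider pruning")
```

The point is not the script; it is that pruning explicit files is even possible, while a vendor's memory grows on its own schedule.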
The bigger claim: ownership of cognition
AI tools will become the default interface for knowledge work. That makes the memory question political. Whoever owns your memory can influence your output. Whoever can edit your memory can shape your choices.
I want my cognitive surface area to be mine. That means I treat AI memory like I treat identity: not something I hand to a platform and hope it behaves.
This is not paranoia. It is a preference for autonomy. I would rather type a little more than let a hidden profile define my thinking.
Closing thought
Memory should be a file I can read, not a feature I cannot see. Until it is, I will keep my context explicit and my tools honest.