AI Memory — a shared brain for every AI you use.
Most AI memory is per-tool, hidden, and locked in. Knovya is the shared layer — your notes, decisions, and context, exposed to Claude, ChatGPT, Cursor, and every MCP-capable client through one open protocol. Free includes 50 calls a month; Pro lifts that to 5,000 with end-to-end encryption.
The same question. Three AIs. Memory off, then on.
Pick a client. Flip the toggle. Watch what changes when an AI has access to the work you have already done — and the work it cites is yours, not invented.
Could you share what you were working on? I do not have a record of last quarter's plans, decisions, or commitments — every conversation starts fresh on my side. If you can paste the relevant notes or describe the projects, I can help shape next quarter from there.
Four kinds of memory, one shared layer.
Cognitive science separates memory into kinds. So does Knovya. Each layer answers a different question — and every layer is reachable from every AI client through the same MCP surface.
Working memory
"What are we talking about right now?"
The note open in your editor. The fragment your AI is reasoning over. Live state, fetched on demand — never stale, never re-explained.
Long-term memory
"Everything you have ever written down."
Your full knowledge base — notes, decisions, retrospectives, voice transcripts, saved chats. Searchable with hybrid keyword and vector retrieval.
Procedural memory
"How you do things around here."
Templates, AI Skills, naming conventions, decision frameworks. The patterns the AI inherits from your past work — without you spelling them out.
Episodic memory
"What worked. What didn't. What is still open."
Outcomes attached to past projects. The Experience Envelope groups precedents by success, partial, or cautionary — so the AI cites real lessons, not invented ones.
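To make "hybrid keyword and vector" concrete, here is a minimal sketch of how such a retrieval layer can rank notes: blend a keyword-match score with a vector cosine similarity. The toy embeddings, the note fields, and the `hybrid_rank` helper are invented for the example; this illustrates the technique, not Knovya's implementation.

```python
from math import sqrt

def keyword_score(query: str, text: str) -> float:
    """Fraction of query terms that appear in the note text."""
    terms = query.lower().split()
    hay = text.lower()
    return sum(t in hay for t in terms) / len(terms)

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na, nb = sqrt(sum(x * x for x in a)), sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_rank(query: str, query_vec: list[float], notes: list[dict], alpha: float = 0.5) -> list[str]:
    """Rank notes by a weighted blend; higher alpha favors keywords."""
    scored = []
    for note in notes:
        s = alpha * keyword_score(query, note["text"]) + (1 - alpha) * cosine(query_vec, note["vec"])
        scored.append((s, note["title"]))
    return [title for s, title in sorted(scored, reverse=True)]

# Toy two-dimensional "embeddings" stand in for real vectors.
notes = [
    {"title": "Q3 retro",     "text": "launch slipped two weeks", "vec": [0.9, 0.1]},
    {"title": "Naming guide", "text": "kebab-case for slugs",     "vec": [0.1, 0.9]},
]
print(hybrid_rank("launch timeline", [0.8, 0.2], notes))  # → ['Q3 retro', 'Naming guide']
```

A query that matches a note both lexically and semantically ("launch" appears in the retro, and the vectors point the same way) outranks one that matches on neither axis.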
Mem0 is a primitive — bring your own architecture. Claude memory is a notebook only Claude can read. Knovya gives you all four layers and exposes them through the open Model Context Protocol, so every AI you use sees the same brain.
33 tools · OAuth 2.1 · E2E encryption
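OAuth 2.1 requires PKCE for every client, so an MCP connection like the one above begins with the client generating a code verifier and its S256 challenge. A self-contained sketch of that step, using only the Python standard library (illustrative only, not Knovya's onboarding code):

```python
import base64
import hashlib
import secrets

def pkce_pair() -> tuple[str, str]:
    """Generate an OAuth 2.1 PKCE code_verifier and its S256 code_challenge."""
    # 32 random bytes, base64url-encoded without padding: a 43-char verifier.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # The challenge is the base64url-encoded SHA-256 digest of the verifier.
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = pkce_pair()
print(len(verifier), len(challenge))  # → 43 43
```

The client sends the challenge with the authorization request and the verifier with the token request; the server recomputes the hash, so an intercepted authorization code is useless on its own.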
Every AI conversation starts from zero —
and you keep paying the re-explanation tax.
Claude added memory in March 2026. ChatGPT had memory before that. Mem0 sells memory as infrastructure. Every product solved the same problem inside its own walls.
So now you have three different memories, none of which talk to each other, and the work you did yesterday in Cursor is invisible to Claude this morning.
- The cost
- Every prompt that begins with "as I mentioned in my last conversation" is a tax — paid in tokens, paid in attention, paid in trust eroded the third time the AI forgets.
- The fix
- Stop putting memory inside each tool. Put memory under all of them. Open protocol. Notes you own. Same brain everywhere.
From the Memex to a brain you actually own.
AI memory is not invented from nothing. Five lines of work taught it what to remember — and where the memory should live.
- 1945 · Vannevar Bush — As We May Think. The Memex: an external, associative store for the things a mind cannot hold. Eighty years before MCP, the same idea — memory should be infrastructure, not a feature inside a single tool. The Atlantic · July 1945
- 1985 · Wegner — Transactive Memory. Couples and teams remember more than individuals because they hold a "who knows what" map. Memory as a shared system, not a private store — the cognitive science that grounds shared AI memory. Psychology · group cognition
- 2024 · Anthropic — Model Context Protocol. An open standard for how an AI client reads from and writes to external systems. Treating memory as a protocol rather than a vendor feature — the move that made shared layers possible. Open spec · cross-vendor
- 2025 · Mem0 — Production memory benchmarks. The ECAI 2025 paper put numbers on the question: how do you extract, consolidate, and retrieve facts from long conversations without dragging the full transcript along every time? Memory became measurable. arXiv:2504.19413 · ECAI
- 2026 · Knovya AI Memory. All four ideas composed into one product: a notes layer for humans, a knowledge graph for retrieval, an open MCP surface for every AI — and you keep the keys. ★ Knovya · production
Nobody gives AI a brain that is actually yours.
Every product solved memory inside its own walls. ChatGPT remembers you in ChatGPT. Claude remembers you in Claude. Mem0 remembers you for the agent your developer ships. None of them follow you across tools — and none of them give you the keys.
- ChatGPT memory: per-tool · hidden · OpenAI-only
- Claude memory: per-tool · file-based · Anthropic-only
- Mem0: developer SDK · no UI · primitive
- Perplexity Spaces: docs only · no graph · no MCP
- Notion AI: in-app context · no MCP surface
- ★ Knovya: notes + graph + 33 MCP tools · shared
The same memory, everywhere you reach.
Write once in Knovya. Read it from Claude, ChatGPT, Cursor, or any other MCP-capable client. Same notes, same graph, four different surfaces.
The notes layer. Block editor, knowledge graph, Experience Envelope — the place memory accumulates. Every note here is reachable from every AI client below.
Claude pulls context mid-conversation. Cites the notes by title, surfaces outcome badges, never invents a precedent that does not exist in your KB.
Cursor's Composer reads your decision logs and architectural notes through the same MCP server. The repo it knows; the reasoning lives in Knovya.
Underneath every surface above sits the same JSON. One server, OAuth-protected. Any MCP-capable client speaks it — Gemini, Copilot, Windsurf, Goose.
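That shared JSON is JSON-RPC 2.0, the wire format the Model Context Protocol standardizes. Here is a sketch of what one tool call could look like on the wire: `tools/call` is the standard MCP method, but the tool name `search_notes` and its arguments are hypothetical, not Knovya's documented schema.

```python
import json

# JSON-RPC 2.0 request an MCP client sends to invoke a server tool.
# "tools/call" is the standard MCP method; the tool name and
# arguments below are hypothetical, not Knovya's documented API.
request = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "search_notes",  # hypothetical tool name
        "arguments": {"query": "Q3 launch retro", "limit": 3},
    },
}

# A well-formed MCP result wraps the tool's output in a content array.
response = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {
        "content": [{"type": "text", "text": "Q3 retro: launch slipped two weeks"}]
    },
}

wire = json.dumps(request)
assert json.loads(wire)["method"] == "tools/call"
print(wire)
```

Because every client speaks this same envelope, the server neither knows nor cares whether the request came from Claude, Cursor, or a script.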
AI Memory composes with the rest of Group I.
A few honest answers.
What is AI memory?
Does Claude have memory now?
Does ChatGPT have memory?
How is Knovya different from Mem0?
What MCP clients work with Knovya AI Memory?
Can I import my ChatGPT memory into Knovya?
Is my AI memory encrypted?
How many AI memory calls do I get on the Pro plan?
Connect every AI you use in 60 seconds.
Free includes 50 MCP calls per month — enough to connect one client and feel the difference. Pro lifts that to 5,000 with end-to-end encryption.