AI Memory — a shared brain for every AI you use.

Most AI memory is per-tool, hidden, and locked in. Knovya is the shared layer — your notes, decisions, and context, exposed to Claude, ChatGPT, Cursor and every MCP-capable client through one open protocol. Free includes 50 calls a month; Pro lifts that to 5,000 with end-to-end encryption.

MCP tools: 33
AI clients connected: 7
Memory you own: 1
AI Memory
Experiment 01 · The Continuity Test

The same question. Three AIs. Memory off, then on.

Pick a client. Flip the toggle. Watch what changes when an AI has access to the work you have already done — and the work it cites is yours, not invented.

You ask

"Help me plan next quarter — pick up from where we left off."

Asking via
Claude · response: stateless · no shared memory

Could you share what you were working on? I do not have a record of last quarter's plans, decisions, or commitments — every conversation starts fresh on my side. If you can paste the relevant notes or describe the projects, I can help shape next quarter from there.

Free · 50 MCP calls / month · 1 client connected. Pro · 5,000 MCP calls / month · all clients · end-to-end encryption → See Pro
All four · what AI Memory holds

Four kinds of memory, one shared layer.

Cognitive science separates memory into kinds. So does Knovya. Each layer answers a different question — and every layer is reachable from every AI client through the same MCP surface.

Layer I

Working memory

"What are we talking about right now?"

The note open in your editor. The fragment your AI is reasoning over. Live state, fetched on demand — never stale, never re-explained.

MCP tools: knovya_read · knovya_search · knovya_context
Layer II

Long-term memory

"Everything you have ever written down."

Your full knowledge base — notes, decisions, retrospectives, voice transcripts, saved chats. Searchable through hybrid keyword and vector retrieval.

MCP tools: knovya_search · knovya_folders · knovya_links
Layer III

Procedural memory

"How you do things around here."

Templates, AI Skills, naming conventions, decision frameworks. The patterns the AI inherits from your past work — without you spelling them out.

MCP tools: knovya_templates · knovya_skills · knovya_persona
Layer IV

Episodic memory

"What worked. What didn't. What is still open."

Outcomes attached to past projects. The Experience Envelope groups precedents by success, partial, or cautionary — so the AI cites real lessons, not invented ones.

MCP tools: knovya_experience · knovya_history
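All four layers sit behind the same MCP surface, which means every client talks to them the same way. A minimal sketch of what a `tools/call` request for `knovya_search` could look like, using the JSON-RPC 2.0 framing the Model Context Protocol defines — the tool name comes from the layers above, while the argument names (`query`, `limit`) are assumptions for illustration:

```python
import json

def build_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> str:
    """Frame an MCP tools/call request as a JSON-RPC 2.0 message.

    The envelope (jsonrpc/id/method/params) follows the MCP spec;
    the argument schema for Knovya's tools is assumed here.
    """
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)

payload = build_tool_call("knovya_search", {"query": "Q3 retrospective", "limit": 5})
print(payload)
```

The same envelope works for any tool in any layer — only `name` and `arguments` change, which is why one server can serve Claude, Cursor, and ChatGPT alike.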

Mem0 is a primitive — bring your own architecture. Claude memory is a notebook — Claude can read it. Knovya gives you all four layers and exposes them through the open Model Context Protocol so every AI you use sees the same brain.

33 tools · OAuth 2.1 · E2E encryption

Every AI conversation starts from zero
and you keep paying the re-explanation tax.

Claude added memory in March 2026. ChatGPT had memory before that. Mem0 sells memory as infrastructure. Every product solved the same problem inside its own walls.

So now you have three different memories, none of which talk to each other, and the work you did yesterday in Cursor is invisible to Claude this morning.

The cost
Every prompt that begins with "as I mentioned in my last conversation" is a tax — paid in tokens, paid in attention, paid in trust eroded the third time the AI forgets.
The fix
Stop putting memory inside each tool. Put memory under all of them. Open protocol. Notes you own. Same brain everywhere.
The lineage

From the Memex to a brain you actually own.

AI memory is not invented from nothing. Five lines of work taught it what to remember — and where the memory should live.

  1. 1945
    Vannevar Bush — As We May Think. The Memex: an external, associative store for the things a mind cannot hold. Eighty years before MCP, the same idea — memory should be infrastructure, not a feature inside a single tool. The Atlantic · July 1945
  2. 1985
    Wegner — Transactive Memory. Couples and teams remember more than individuals because they hold a "who knows what" map. Memory as a shared system, not a private store — the cognitive science that grounds shared AI memory. Psychology · group cognition
  3. 2024
    Anthropic — Model Context Protocol. An open standard for how an AI client reads from and writes to external systems. Treating memory as a protocol rather than a vendor feature — the move that made shared layers possible. Open spec · cross-vendor
  4. 2025
    Mem0 — Production memory benchmarks. The ECAI 2025 paper put numbers on the question: how do you extract, consolidate, and retrieve facts from long conversations without dragging the full transcript along every time? Memory became measurable. arXiv:2504.19413 · ECAI
  5. 2026
    Knovya AI Memory. All four ideas composed into one product: a notes layer for humans, a knowledge graph for retrieval, an open MCP surface for every AI — and you keep the keys. ★ Knovya · production
First of its kind

Nobody gives AI a brain that is actually yours.

Every product solved memory inside its own walls. ChatGPT remembers you in ChatGPT. Claude remembers you in Claude. Mem0 remembers you for the agent your developer ships. None of them follow you across tools — and none of them give you the keys.

  • ChatGPT memory: per-tool · hidden · OpenAI-only
  • Claude memory: per-tool · file-based · Anthropic-only
  • Mem0: developer SDK · no UI · primitive
  • Perplexity Spaces: docs only · no graph · no MCP
  • Notion AI: in-app context · no MCP surface
  • ★ Knovya: notes + graph + 33 MCP tools · shared
Surfaces

The same memory, everywhere you reach.

Write once in Knovya. Read it from Claude, ChatGPT, Cursor, or any other MCP-capable client. Same notes, same graph, four different surfaces.

Knovya editor · where memory is written

The notes layer. Block editor, knowledge graph, Experience Envelope — the place memory accumulates. Every note here is reachable from every AI client below.

Claude conversation · via knovya_search

Claude pulls context mid-conversation. Cites the notes by title, surfaces outcome badges, never invents a precedent that does not exist in your KB.

Cursor IDE · decision logs in scope

Cursor's Composer reads your decision logs and architectural notes through the same MCP server. The repo it knows; the reasoning lives in Knovya.

MCP tool response · the protocol layer

Underneath every surface above sits the same JSON. One server, OAuth-protected. Any MCP-capable client speaks it — Gemini, Copilot, Windsurf, Goose.
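What that shared JSON looks like in practice: a sketch of unwrapping a tool-call result on the client side. The `{"content": [{"type": "text", ...}]}` envelope follows the MCP spec's tool-result shape; the note payload inside is a made-up example, not real Knovya output:

```python
import json

# A hypothetical tool-call result, framed the way the MCP spec
# describes: a JSON-RPC response whose result carries typed
# content blocks. The note text itself is invented for illustration.
response = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [
            {"type": "text", "text": "Q3 retrospective (outcome: success)"}
        ]
    },
})

def extract_text(raw: str) -> list[str]:
    """Collect the text blocks from a tool-call result."""
    result = json.loads(raw).get("result", {})
    return [
        block["text"]
        for block in result.get("content", [])
        if block.get("type") == "text"
    ]

print(extract_text(response))
```

Because every client unwraps this same shape, a new MCP-capable client needs no Knovya-specific code — it already knows how to read the result.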

Frequently asked

A few honest answers.

What is AI memory?
AI memory is the persistent layer that lets a language model remember context across sessions. Without it, every conversation starts from zero. Knovya AI Memory composes notes, decisions and links into a shared brain — exposed to Claude, ChatGPT, Cursor and other MCP-capable clients so the same memory follows you across every tool you use.
Does Claude have memory now?
Yes — Anthropic activated Chat Memory for all Claude accounts in March 2026, plus a separate Memory Tool for the API. Both are Claude-only. Knovya AI Memory is different in scope: it is the shared layer that follows you to Claude and to ChatGPT and to Cursor, instead of living inside any single product.
Does ChatGPT have memory?
Yes. ChatGPT memory stores facts about you across sessions inside ChatGPT. It is per-tool — it does not travel to Claude or Cursor. Knovya holds the shared layer: the same notes, the same decisions, surfaced wherever you happen to be working.
How is Knovya different from Mem0?
Mem0 is a developer SDK — a memory primitive you build into an agent. Knovya is the knowledge-worker layer above that primitive: a notes editor, a knowledge graph and an Experience Envelope, with 33 MCP tools so any AI client can read and write structured memory without you writing code.
What MCP clients work with Knovya AI Memory?
Claude (Desktop and mobile), ChatGPT, Cursor, Gemini, GitHub Copilot, Windsurf and Goose are documented integrations. Any client that speaks the Model Context Protocol can connect — the open standard means new clients work the moment they ship.
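Connecting a client is typically a short config entry. A hypothetical Claude Desktop entry, assuming Knovya exposes a remote MCP endpoint and using the `mcp-remote` bridge for stdio-only clients — the server URL here is a placeholder, not the real endpoint:

```json
{
  "mcpServers": {
    "knovya": {
      "command": "npx",
      "args": ["-y", "mcp-remote", "https://mcp.knovya.example/mcp"]
    }
  }
}
```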
Can I import my ChatGPT memory into Knovya?
Yes. Knovya accepts ChatGPT, Claude and Gemini memory exports as a starting set of notes — paste the export and Knovya structures it into your knowledge base. From that point forward your memory lives in one place and serves every AI you connect.
Is my AI memory encrypted?
Pro and Team plans include note-level end-to-end encryption (AES-256-GCM). Encrypted notes never leave your account in plaintext — they cannot be read on the server, embedded for search, or pulled into AI training. You decide which notes are MCP-readable and which stay sealed.
How many AI memory calls do I get on the Pro plan?
Pro includes 5,000 MCP tool calls per month — enough for daily use across Claude, ChatGPT and Cursor. Free includes 50 MCP calls per month, sufficient to connect one client and try the full memory layer with no credit card.

Connect every AI you use in 60 seconds.

Free includes 50 MCP calls per month — enough to connect one client and feel the difference. Pro lifts that to 5,000 with end-to-end encryption.
