Stateless agents start every session blank. Knovya is the layer they read from and write to over MCP — a notes app for you, a long-term memory for them. Six agents share the same database. Four recall modes. One workspace.
§ 2 — The Memory Layer
Pick an agent, pick a recall mode, watch the conversation land. Each mode answers a different shape of question — semantic recall for "what did I write about X," neighborhood walks for "what's around this decision," temporal queries for "what was true two months ago," and memory-health audits for "what's gone stale." Same database, four lenses.
§ 3 — The four recall modes
An agent that only does vector similarity gets you halfway. Real memory needs four shapes of recall — each built for a different cognitive job. Knovya ships all four through MCP, each with its own retrieval path through the same workspace.
3a · Four recall modes
How agents ask Knovya for memory.
"What did I write about X?"
The default mode. Knovya searches the workspace by meaning — not just keywords — and returns the most relevant notes ranked by similarity, including archived ones if the agent asks.
"What's around this decision?"
Pick an anchor note. Knovya returns its typed link neighbors plus second-degree relatives that share folders or tags — all wrapped with their epistemic role and the agent that created each link.
"What was true two months ago?"
Walk the supersedes chain. Knovya returns the notes that were current at the requested date and marks the rest with their replacement state — historical context, recoverable.
"What's gone stale?"
Audit the workspace — which folders are dense and well-cited, which are orphaned and probably stale, where conflicting versions need a supersedes link. Maintenance, surfaced.
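The four modes above map naturally onto MCP tool calls. A minimal sketch of what an agent's requests might look like — tool names and parameter keys here are illustrative, not Knovya's actual API:

```python
# Hypothetical MCP tool-call payloads for the four recall modes.
# Tool names and parameter keys are illustrative, not Knovya's actual API.

semantic = {"tool": "recall_semantic",
            "params": {"query": "What did I write about X?",
                       "include_archived": False}}

neighborhood = {"tool": "recall_neighborhood",
                "params": {"anchor_note": "note_123",
                           "depth": 2}}          # typed links + second-degree relatives

temporal = {"tool": "recall_temporal",
            "params": {"query": "pricing model",
                       "as_of": "2026-03-04"}}   # walk the supersedes chain

audit = {"tool": "recall_audit",
         "params": {"checks": ["orphans", "missing_supersedes", "stale_folders"]}}

for call in (semantic, neighborhood, temporal, audit):
    print(call["tool"], "->", sorted(call["params"]))
```

Same workspace behind all four calls; only the retrieval path differs.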
3b · Six agent connectors
First-class MCP connectors today.
Claude
Desktop · Code
ChatGPT
Custom GPT · MCP apps
Cursor
Editor · agent mode
Gemini
CLI · Workspace
Copilot
GitHub · VS Code
Windsurf
Codeium · agent mode
§ 4 — The problem
Scene 01
ChatGPT and Claude ship a "memory" feature now — but each is locked to its own platform. Move to a different agent and the context is gone. Memory inside one tool isn't memory.
Scene 02
Mem0, Letta, Zep, LangMem, Supermemory are all developer SDKs — infrastructure you embed inside a custom agent. There's no UI for the human, no notes app at the end of the wire. Memory you can't read is half the system.
Scene 03
Vertex AI Memory Bank, Bedrock memory primitives — cloud-platform offerings with steep ecosystem lock-in. Useful if you're already inside Google or AWS. Useless if you're not.
Each of these solves a slice. None of them solves "the human takes notes; the agents read them" — the obvious thing. Knovya is the obvious thing: a notes app for you, the same database read over MCP by every agent you connect.
§ 5 — Lineage
Agentic memory as a research field is barely three years old. It moved fast because the underlying problem was unmissable: every agent demo ended the same way — "and then it forgot." Each milestone is a step toward agents that don't.
2018—2022
GPT-2, GPT-3, early Claude. The model "remembers" only what fits in the prompt. Sessions are blank-slate by definition. Memory isn't a feature; it's an absence.
2023
UC Berkeley's MemGPT paper proposes treating the LLM like an operating system that pages information between a working context, a recall store, and an archival store. The project has since become Letta; the paper remains the field's most influential framing.
2024
ChatGPT Memory rolls out; Anthropic follows with Claude memory the next year. Each agent gets a private, vendor-locked memory layer — useful inside one tool, invisible everywhere else.
2025
The Mem0 paper at ECAI 2025 benchmarks ten memory approaches against the LOCOMO long-conversation dataset. Zep, LangMem, Supermemory establish the developer-SDK category. Memory becomes a real research field.
2026
Knovya: a consumer notes app that doubles as the agent memory layer. MCP-native. Six agents share one workspace. Four recall modes. The human writes notes; the agents read them. Same database.
§ 6 — First mover
ChatGPT memory · Claude memory
Vendor-locked. Private to one platform. The memory inside ChatGPT can't help Cursor; the memory inside Claude can't help ChatGPT. One agent, one silo.
Mem0 · Letta · Zep · LangMem · Supermemory
Developer SDKs — infrastructure you embed inside a custom agent. No UI for the human. The memory works, but the only person who can see it is the engineer who shipped it.
Vertex AI Memory Bank · Bedrock memory
Cloud-platform primitives. Powerful inside Google or AWS, invisible outside. Useful if your stack already lives there; otherwise, an integration tax most teams won't pay.
Knovya — Agentic Memory
A notes app for the human, the same database read by every agent over MCP. Six connectors. Four recall modes. One workspace. The notes you write are the memory the agents read — no second system to learn.
§ 7 — Surfaces
Connecting an agent is one keystroke. Reading what an agent cited is one click. Auditing what an agent saw is one panel. The memory is a feature of the workspace, not a hidden API.
Surface 01 · Quick connect
Open the command palette, type the agent's name, hit Enter. Knovya issues the OAuth handshake, the MCP scopes get presented, and the agent is reading your workspace within seconds.
Surface 02 · Provenance
Every backlink an agent creates wears its badge. You can tell at a glance which connection came from Claude during yesterday's coding session and which came from Cursor while it was wiring up tests. Provenance travels with the citation.
Q3 hiring plan
Senior IC headcount sequenced after the offsite — three slots open, prioritized by team capacity gaps surfaced in the OKR review.
cited by:
Claude · OKR review session
Cursor · capacity model
ChatGPT · interview rubric
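A citation that carries its provenance can be modeled as a small record; the field names below are illustrative, not Knovya's schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Citation:
    # Hypothetical shape for a backlink that travels with its provenance.
    source_note: str   # the note doing the citing
    target_note: str   # the note being cited
    agent: str         # which agent created the link
    session: str       # human-readable session label

citations = [
    Citation("okr-review", "q3-hiring-plan", "Claude", "OKR review session"),
    Citation("capacity-model", "q3-hiring-plan", "Cursor", "capacity model"),
    Citation("interview-rubric", "q3-hiring-plan", "ChatGPT", "interview rubric"),
]

# Render the "cited by" badge row for one note.
badge = " · ".join(f"{c.agent}: {c.session}" for c in citations)
print(badge)
```

Because the agent name lives on the link itself, the badge never has to be reconstructed after the fact.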
Surface 03 · Time
Ask the workspace as it was. The supersedes chain is a first-class graph edge in Knovya — agents don't search through deleted versions, they read history as a lineage. Current notes stay current; replaced notes stay accessible.
"What was our pricing model two months ago?" as_of 2026-03-04
Surface 04 · Audit
A workspace dashboard for the maintenance no one does. Orphaned notes, conflict pairs missing supersedes links, folders that haven't been touched in months — surfaced, ranked, one-click resolvable.
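Two of those checks — orphans and stale folders — can be sketched over a toy notes graph; the workspace shape and thresholds are illustrative assumptions:

```python
from datetime import date

# Hypothetical workspace: note id -> (outgoing links, last touched)
workspace = {
    "q3-hiring-plan": ({"okr-review"}, date(2026, 5, 1)),
    "okr-review":     ({"q3-hiring-plan"}, date(2026, 4, 20)),
    "old-roadmap":    (set(), date(2025, 9, 3)),   # no links in or out
}

def audit(ws, today, stale_after_days=120):
    """Return (orphans, stale): unlinked notes and notes untouched too long."""
    linked_to = {t for links, _ in ws.values() for t in links}
    orphans = [n for n, (links, _) in ws.items()
               if not links and n not in linked_to]
    stale = [n for n, (_, touched) in ws.items()
             if (today - touched).days > stale_after_days]
    return orphans, stale

orphans, stale = audit(workspace, today=date(2026, 5, 15))
print("orphans:", orphans)   # ['old-roadmap']
print("stale:",   stale)     # ['old-roadmap']
```

The dashboard's job is the same loop at workspace scale, plus ranking and one-click fixes.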
§ 8 — Bonded with
Agentic Memory isn't a separate engine — it's the bundle. Hybrid Search powers semantic recall. NoteRank weighs which note matters most. Backlinks define the neighborhood. Experience Envelope walks the supersedes chain.
Hybrid Search: powers semantic recall — the four-engine retrieval stack agents use to find the relevant notes.
NoteRank: weighs which note matters most — agents see the same prioritized order a human would.
Experience Envelope: walks supersedes chains and assembles the layered context behind every recalled note.
Backlinks: defines the neighborhood — typed bidirectional edges feed the neighborhood recall mode directly.
§ 9 — FAQ

What is agentic memory?
Agentic memory is the persistent context layer that lets AI agents remember across sessions. Stateless models forget everything between conversations. A memory layer keeps the relevant facts, decisions, and prior reasoning available so the agent can pick up where it left off — and so the next session begins informed rather than blank. The category emerged in 2023-2024 with MemGPT/Letta and matured in 2025 with Mem0's ECAI paper and the LOCOMO benchmark.
How is Knovya different from memory SDKs like Mem0 or Letta?
Mem0, Letta, Zep, LangMem, Supermemory are developer SDKs — infrastructure you embed inside a custom agent to give it memory. Knovya is the memory itself: a notes app you actually use, with a UI built for humans, that the same agents can read from and write to over MCP. Your second brain is the agent's long-term memory. No separate schema, no separate database, no two-system sync.
Which agents can connect today?
Six first-class connectors today: Claude (Desktop and Code), ChatGPT (custom GPT plus apps that speak MCP), Cursor, Gemini, Copilot, and Windsurf. Plus a custom MCP integration for any agent that speaks the protocol. All six see the same notes, the same backlinks, the same supersedes chain — there is one workspace, not six.
Do agents share memory with each other?
Yes — by design. A note Claude writes during a coding session is a note Cursor can read tomorrow and ChatGPT can cite next week. The same applies to backlinks, mentions, and supersedes relationships. Agent badges record which agent created which connection, so provenance travels with the memory. Workspace permissions still apply — agents only see what their authenticated user can see.
What is temporal recall?
Temporal recall lets agents query the workspace as of a specific date — "what did we know about pricing two months ago?" — and walks the supersedes chain to mark which notes were current at that moment versus which have since been replaced. The replacement chain is preserved on every supersedes link, so historical context is recoverable without searching through deleted versions.
Can I see what an agent recalled?
Yes. Every recall an agent performs is logged in the activity panel for that workspace — the query, the recall mode, and the notes returned. Every connection an agent creates carries an agent badge that names the source. You can revoke an agent's access in one click, and you can audit a session after the fact to see exactly which notes informed which response.
What about encrypted notes and security?
Encrypted notes are excluded from agent recall by default — they never appear in any MCP response until the user explicitly grants the agent access. Workspace traffic between agents and Knovya runs over TLS with OAuth 2.1 + PKCE authentication and granular per-tool scopes; an agent that has read access can't write unless the user issued a write scope.
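The per-tool scope rule reads as a simple gate. A sketch under assumed scope names — the real scope strings come from the OAuth grant, not from this example:

```python
# Hypothetical scopes issued by an OAuth 2.1 grant (names are illustrative).
granted = {"notes:read", "links:read"}   # a read-only grant: no write scope

# Each tool declares the scope it requires before it may run.
REQUIRED = {
    "recall_semantic": "notes:read",
    "create_backlink": "links:write",
}

def allowed(tool: str) -> bool:
    """An agent may call a tool only if its grant includes the required scope."""
    return REQUIRED[tool] in granted

print(allowed("recall_semantic"))  # True: read scope was granted
print(allowed("create_backlink"))  # False: no write scope was issued
```

Read and write are separate grants, so a recall-only agent can never mutate the workspace even if it asks.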
Stop maintaining two systems — your notes and your agent's memory. They're the same thing. Write one, read from both. Six agents, one workspace, four ways to ask.