We built MCP integrations for Claude, ChatGPT, Cursor, Gemini, Copilot, and Windsurf — the ones almost everyone wants. This page is for the integration we haven’t pre-built yet. The Slack bot you’re prototyping. The Zapier flow with the weird trigger. The internal CRM sync. The React Native side project at 11 PM.
Same archive every connector reads from — exposed as a clean REST API at
api.knovya.com/v1 with 12 canonical webhook events for the push side. OAuth
2.1 + PKCE for user-facing apps, scoped API keys for server-to-server. Rich
payloads — the changed object, not just an entity ID.
One curl. One JSON. One archive. Three ways to reach the same data — straight cURL for the prototype, an SDK for the production app, a webhook subscription for the push side. Pick the path that fits your runtime; the archive is the same on the other side.
# Create a note from the command line
curl -X POST https://api.knovya.com/v1/notes \
  -H "Authorization: Bearer $KNOVYA_KEY" \
  -H "Knovya-Version: 2026-05-01" \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Architecture decision log — auth v2",
    "content_md": "## Decision\n\nMove to OAuth 2.1...",
    "folder": "Decisions",
    "tags": ["auth", "adr"],
    "metadata": { "type": "decision", "status": "active" }
  }'

# Response: 201 Created
# { "id": "note_01H...", "url": "...", ... }
curl + fetch + any HTTP client. Bearer token in the
Authorization header. Versioned with Knovya-Version. Nothing
to install — paste the snippet above and you’re done.
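The same call works from any HTTP client. A minimal sketch using only Python's standard library, with the endpoint, headers, and payload taken from the curl example above (the request is built but not sent, so you can inspect it first):

```python
import json
import os
import urllib.request

API = "https://api.knovya.com/v1"

def create_note(title: str, content_md: str, **extra) -> urllib.request.Request:
    """Build a POST /v1/notes request; send it with urllib.request.urlopen()."""
    payload = {"title": title, "content_md": content_md, **extra}
    return urllib.request.Request(
        f"{API}/notes",
        data=json.dumps(payload).encode(),
        headers={
            # Bearer token + version header, as in the curl snippet above.
            "Authorization": f"Bearer {os.environ.get('KNOVYA_KEY', '')}",
            "Knovya-Version": "2026-05-01",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = create_note(
    "Architecture decision log — auth v2",
    "## Decision\n\nMove to OAuth 2.1...",
    folder="Decisions",
    tags=["auth", "adr"],
)
# resp = urllib.request.urlopen(req)  # expect 201 Created
```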
pip install knovya · npm install @knovya/sdk ·
go get knovya.com/sdk. Typed clients for Python, TypeScript, and Go —
OAuth, retries, rate-limit backoff handled for you.
POST /v1/webhooks with your URL and an events filter. Knovya
pushes signed POSTs the moment something fires. Rich payloads — no extra
fetch round-trip just to find out what changed.
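Registering a subscriber is a single POST. A hedged sketch of that call — the url and events fields come from the description above, but the secret field name is an assumption for illustration; check the reference docs for the exact body shape:

```python
import json
import os
import urllib.request

def build_subscription(target_url: str, events: list, secret: str) -> urllib.request.Request:
    # POST /v1/webhooks with your URL and an events filter, per the text
    # above. "secret" is an assumed field name for the signing key.
    body = {"url": target_url, "events": events, "secret": secret}
    return urllib.request.Request(
        "https://api.knovya.com/v1/webhooks",
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {os.environ.get('KNOVYA_KEY', '')}",
            "Knovya-Version": "2026-05-01",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_subscription(
    "https://example.com/hooks/knovya",
    ["note.created", "note.updated", "note.share_invited"],
    "whsec_example",
)
# send with urllib.request.urlopen(req) when ready
```

The events filter means Knovya only delivers the event types you listed — no client-side filtering needed.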
Each MCP tool maps to a REST endpoint and (where it makes sense) a webhook event. The archive has one shape; the protocols are how you reach it. Same auth, same scopes, same rate-limit budget — pick the one that fits your runtime.
Where am I, what’s here. Read-only orientation calls — workspace info, schema, context for a topic. The REST equivalents of the MCP tools your AI client uses first.
GET /v1/ping · GET /v1/workspace · GET /v1/home · GET /v1/persona · GET /v1/context · GET /v1/search · GET /v1/schema
Open the note. Trace a link. Pull an attachment. Get a single note, follow the knowledge graph, fetch experience envelopes, list folders. Every read returns rich structured content — not just IDs.
GET /v1/notes/:id · GET /v1/experience · GET /v1/memory · GET /v1/folders · GET /v1/notes/:id/history · GET /v1/links · GET /v1/attachments
Create, edit, organize, archive. POST/PATCH/DELETE on notes — every mutation fires a corresponding webhook event (note.created, note.updated, note.deleted, etc.) with rich payloads, so your subscribers don’t need a follow-up fetch.
POST /v1/notes · PATCH /v1/notes/:id · POST /v1/organize · DELETE /v1/notes/:id · note.created · note.updated · note.deleted · note.archived
Multi-agent and collaboration primitives. Presence, channels, consensus voting, attention heatmaps. Where REST meets coordination — and where the share-invitation webhook events fire (invited, accepted, declined, revoked).
GET /v1/presence · POST /v1/channels · POST /v1/coordinate · POST /v1/agents/messages · note.share_invited · note.share_accepted · note.share_declined · note.share_revoked
Shape the archive. AI-assisted transforms, template application, export to Markdown / DOCX / PDF, sharing controls. Where the archive becomes a primitive your integration can reshape.
POST /v1/ai/transform · GET /v1/templates · POST /v1/export · POST /v1/import · POST /v1/share
Six AI clients pre-built. The seventh integration is the one you’re building tonight.
We shipped MCP for Claude, ChatGPT, Cursor, Gemini, Copilot, Windsurf — the connectors almost everyone wants. Six doorways, one archive, ninety-second installs. Most knowledge workers will never need anything else.
But you’re not most knowledge workers. You’re building a Slack bot that watches an internal channel. A Zapier flow with a weird trigger. A React Native app for the product team. An internal dashboard that pulls from three tools and lands in a fourth. The integration you need is specific to your stack, and it’s never going to ship as a pre-built doorway.
The REST API is the long tail of integrations — for the ones we’ll never anticipate. Same archive every connector reads from. Same auth, same scopes, same data. Plus 12 signed webhook events for the push side, so you don’t have to poll.
↳ §5 Lineage How knowledge bases got their REST + webhook layer
Five steps from “knowledge bases are closed apps” to “your archive is a primitive your AI can build on.”
REST endpoints for pages, databases, blocks. The era of “knowledge bases are closed apps” ends. Internal tokens, OAuth for marketplace apps, a versioned spec — the template every knowledge tool would follow.
Mem ships an API. Obsidian opens its plugin surface. Reflect, Roam, Logseq — every knowledge tool becomes integrable. But the primitives are still about files and pages, not reasoning, decisions, or the shape of memory.
An open spec for AI ↔ tools. Knowledge becomes portable across models. But MCP alone doesn’t cover the long tail — Slack bots, mobile apps, internal CRMs still need REST and webhooks.
The top feature request for three years lands. Initial event types cover pages and databases, with sparse payloads — entity ID + metadata only, fetch the rest via follow-up API call. The push era for knowledge tools begins.
Same archive every AI connector reads from, exposed three ways: REST for synchronous calls, rich-payload webhooks for the push side (12 canonical events, HMAC-signed, subscriber-filtered), and MCP for AI-native clients. The first knowledge API designed for the AI builder, not just the page builder.
Pages & DB: REST endpoints for pages, databases, blocks. Webhooks shipped 2026-03 with sparse payloads (entity ID only — fetch the rest separately). The template every knowledge tool followed; not designed for AI workflows.
Local plugin: Local-first plugin surface — runs inside the Obsidian app, not as a remote API. Powerful for desktop workflows, but no webhooks, no remote access, no AI binding. Different shape entirely.
Notes-as-API: Notes-as-API with light AI hooks. Closed beta historically; narrow primitive set. Good for capture and retrieval, limited for graph queries or webhook-driven flows.
Niche / partial: Smaller knowledge tools with community wrappers and partial APIs. Webhook support thin to nonexistent. Useful for niche flows; not a primitive an AI agent can build on.
Closed AI: Notion’s own AI surface — works inside Notion, doesn’t expose user-defined MCP bindings. Closed to the connector ecosystem; you can’t bring your own AI tooling to it.
Graph + Memory: REST + 12 canonical webhook events with rich payloads (full data, not just IDs) + subscriber-side event filter at registration + native MCP binding. The first knowledge-base API designed for the AI builder — same archive, three protocols, OAuth 2.1 throughout.
The differentiator is shape, not surface. Notion’s API is excellent at pages and databases; that’s not what we are. Knovya’s primitives — knowledge graph, memory, NoteRank, experience envelopes — were designed from day one for an AI agent to query. The REST API exposes those primitives as HTTP. Webhooks push them. MCP binds them. Same archive, built for a different question.
Where the API actually shows up — the cURL response, the SDK call, the webhook payload, the reference doc.
{
"results": [
{ "id":"note_01H9Z...", "title":"ADR-014 · Auth v2", "score":0.94 },
{ "id":"note_01H7K...", "title":"OAuth migration retro", "score":0.81 }
],
"next_cursor": "eyJvZmZzZXQ..."
}

Standard REST shapes. Cursor pagination. Rate-limit headers on every response. Nothing surprising — just a clean knowledge API where you’d expect one.
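Draining a cursor-paginated endpoint is a short loop over the response shape shown above (results plus next_cursor). In this sketch, fetch_page stands in for whatever HTTP call you use — it takes a cursor (or None for the first page) and returns the parsed JSON:

```python
def iter_results(fetch_page):
    """Yield every result across pages, following next_cursor until it's empty."""
    cursor = None
    while True:
        page = fetch_page(cursor)
        yield from page["results"]
        cursor = page.get("next_cursor")
        if not cursor:
            break

# Demo with canned pages instead of a live HTTP call:
pages = {
    None: {"results": [{"id": "note_01H9Z"}], "next_cursor": "c1"},
    "c1": {"results": [{"id": "note_01H7K"}], "next_cursor": None},
}
ids = [r["id"] for r in iter_results(pages.__getitem__)]
# ids == ["note_01H9Z", "note_01H7K"]
```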
import os

from knovya import Knovya

client = Knovya(api_key=os.environ["KNOVYA_KEY"])

notes = client.search("auth strategy", limit=10)
for n in notes:
    print(n.title, n.score)

# OAuth, retries, rate-limit backoff:
# all handled by the SDK.
Typed clients for Python, TypeScript, Go. OAuth flow, exponential backoff, retry on 429s — the SDK handles them. You write the business logic.
// X-Knovya-Signature: sha256=a3f2e8b1...
// X-Knovya-Event: note.share_invited
// X-Knovya-Delivery: evt_01HZK...
{
  "event_type": "note.share_invited",
  "event_type_legacy": "davet.gonderildi",
  "timestamp": "2026-05-04T11:30:00Z",
  "workspace_id": "ws_01HZ...",
  "data": {
    "invitation_id": "inv_01H...",
    "note_id": "note_01H...",
    "note_title": "Q4 Roadmap",
    "inviter": { "id": "u_...", "name": "Mehmet Y." },
    "invited_email": "[email protected]",
    "permission": "viewer",
    "expires_at": "2026-06-04T11:30:00Z"
  }
}
The full data block, signed with HMAC-SHA256, dual-emitted for legacy subscribers. No follow-up GET to figure out what changed — it’s already there.
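Verifying the signature on your receiving end is standard HMAC-SHA256 over the raw request body, compared in constant time. A sketch, assuming the X-Knovya-Signature header carries "sha256=" plus the hex digest as shown above:

```python
import hashlib
import hmac

def verify_signature(secret: bytes, body: bytes, header: str) -> bool:
    """Check an X-Knovya-Signature header against the raw request body."""
    expected = "sha256=" + hmac.new(secret, body, hashlib.sha256).hexdigest()
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(expected, header)
```

Compute the digest over the raw bytes you received, before any JSON parsing — re-serializing the parsed payload can change whitespace and break the match.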
Full OpenAPI 3.1 spec, hosted reference, copy-paste code snippets in five languages. Same source-of-truth your SDKs are generated from.
For Claude, ChatGPT, Cursor, Gemini — and Copilot, Windsurf — the connector is one paste. Use the API for the integration we couldn’t ship, the MCPs for the ones we did.
The morning conversation. Reads your decision log. Anthropic-certified install path.
The midnight draft. Past sessions, saved quotes, half-finished threads — already there.
When the same archive needs to live inside the IDE. Composer cites your ADRs.
Long-context research. Synthesis written back to your archive instead of trapped in a chat.
Yes. The Knovya REST API exposes notes, knowledge graph, memory, search, folders, and agents through standard HTTP endpoints under api.knovya.com/v1. Authentication uses OAuth 2.1 with PKCE for user-facing apps, and scoped API keys for server-to-server use.
The MCP server and the REST API back the same archive — anything you can do through Claude or Cursor, you can do through curl.
Yes — 12 canonical webhook events covering note lifecycle (note.created, note.updated, note.deleted, note.archived, note.pinned, note.locked, note.moved, note.shared) and share invitations (note.share_invited, note.share_accepted, note.share_declined, note.share_revoked).
Payloads are HMAC-SHA256 signed and rich — they include the relevant data block, not just an entity ID. Subscribers register an events filter at creation time, so you only receive what you asked for.
Notion's API treats your workspace as pages and databases. Knovya treats it as a knowledge graph with built-in memory, NoteRank, and AI-native primitives — search by meaning, recall past precedents, query the link graph.
Webhook payloads are rich (the changed data, not just the ID) and subscribers can filter at registration. And every endpoint is also reachable as an MCP tool — no other knowledge-base API binds to Claude, ChatGPT, or Cursor that way.
Yes. MCP is the AI-native path; REST + webhooks is the universal path. For Slack bots, Zapier flows, custom CRM syncs, mobile apps, internal dashboards, or anything that doesn't speak MCP, the REST API is the answer.
The two stacks share the same auth, the same archive, and the same rate-limit budget — pick the one that fits your runtime. You can also mix them: an MCP-bound AI client on the read side, a REST cron job on the write side.
Free: 50 calls per month (shared with MCP). Pro: 5,000 calls per month with burst tolerance. Team: unlimited.
Webhooks don't consume your call budget — incoming POSTs are pushed by Knovya, not pulled by your integration. Standard rate-limit headers (X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset, Retry-After) are returned on every response.
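Handling a 429 is mechanical once Retry-After is on the response. A sketch of the retry loop, where call() stands in for your HTTP function and returns (status, headers, body); the fallback to exponential backoff when the header is absent is a common convention, not a documented requirement:

```python
import time

def with_retry_after(call, max_attempts=5):
    """Retry call() on 429, sleeping for Retry-After seconds between attempts."""
    for attempt in range(max_attempts):
        status, headers, body = call()
        if status != 429:
            return status, body
        # Honor Retry-After if present; otherwise back off exponentially.
        time.sleep(float(headers.get("Retry-After", 2 ** attempt)))
    raise RuntimeError("still rate-limited after retries")

# Demo: first response is a 429 with Retry-After: 0, then a 200.
responses = iter([
    (429, {"Retry-After": "0"}, ""),
    (200, {}, '{"ok": true}'),
])
status, body = with_retry_after(lambda: next(responses))
```

(The SDKs do this for you; the loop above is only for raw-HTTP integrations.)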
Two paths. For server-to-server use (cron jobs, scripts, internal tools), create a scoped API key from Settings → API & Webhooks — pick the scopes you need (notes:read, notes:write, webhooks:manage, etc.) and rotate at any time.
For user-facing apps and integrations published to others, use OAuth 2.1 with PKCE — same standard Anthropic and OpenAI use for their connector flows. Both paths feed the same archive, the same scope system, the same rate limits.
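The PKCE half of that flow is client-side and standard (RFC 7636, S256 method): generate a random code_verifier, derive the code_challenge, send the challenge in the authorize request and the verifier in the token exchange. A minimal sketch:

```python
import base64
import hashlib
import secrets

def pkce_pair():
    """Return (code_verifier, code_challenge) per RFC 7636 S256."""
    # 32 random bytes -> 43-char base64url verifier, padding stripped.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode()).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = pkce_pair()
# authorize request: code_challenge=<challenge>&code_challenge_method=S256
# token exchange:    code_verifier=<verifier>
```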
Twelve canonical events today: note.created, note.updated, note.deleted, note.archived, note.pinned, note.locked, note.moved, note.shared, note.share_invited, note.share_accepted, note.share_declined, note.share_revoked.
More on the roadmap (agent.message, memory.recall, knowledge_graph.link). Each event ships rich data — the changed object, the actor, the timestamp — and a legacy event_type alias for older subscribers, so consumers can migrate gradually.
Same archive every connector reads from — exposed as a clean REST API, a webhook stream, and a typed SDK. The Slack bot, the Zapier flow, the React Native app, the internal CRM sync — whatever you’re building tonight, the data’s already there.
Or jump to the first call, then see pricing.
OAuth 2.1 + PKCE · scoped API keys · HMAC-signed webhooks · Free tier 50 calls/month · Pro 5,000 · 14-day Pro trial.