Gemini CLI
npm install -g @google/gemini-cli, then edit ~/.gemini/settings.json
and add the knovya entry shown below. Or run
gemini mcp add knovya https://mcp.knovya.com/mcp
to let the CLI write the config for you.
Personal Intelligence pulls Gmail, Drive, and Calendar into Gemini automatically — useful, scoped to what Google sees. Your decision log lives somewhere else. The customer interview from October. The ADR your team agreed on last quarter. The retro nobody re-reads but everybody references.
Knovya is the part Gemini reads about you. Same archive across Gemini CLI in your
terminal, Code Assist Agent Mode in VS Code, and the Gemini API in your scripts. One MCP
server, one ~/.gemini/settings.json entry, every Gemini surface that supports MCP —
Gemini CLI, Code Assist Agent Mode, and the Gemini API SDK all read MCP servers from the
same file. Add Knovya once; use it everywhere Gemini does.
{
"mcpServers": {
"knovya": {
"url": "https://mcp.knovya.com/mcp"
},
"github": {
"command": "docker",
"args": ["run", "-i", "github-mcp"]
}
}
}
// Same file for Gemini CLI + Code Assist Agent Mode.
// Project-local override: .gemini/settings.json
// OAuth handles sign-in. Run /mcp to verify.
Install the Gemini Code Assist extension in VS Code. Agent Mode reads the same
~/.gemini/settings.json Gemini CLI uses — Knovya is already there. Open
the chat, switch to Agent, and Gemini calls Knovya's tools mid-task.
Pass Knovya as an mcp_server in your Gemini API request — supported in
the API and SDK since March 2026. google-genai for Python,
@google/genai for Node. Combine MCP tool calls with Google Search grounding
in a single request.
Gemini reaches into Discovery first when you ask “what did we decide about X last quarter?” Read tools fire when it cites a customer interview. Write tools save the chat as a structured decision log — with your confirmation. The same 34 tools that Claude and Cursor use; Gemini reads them the same way.
Where am I, what's here. Read-only orientation tools Gemini calls first — before guessing from training data. Long-context Gemini 3.1 Pro pairs especially well with knovya_context for whole-topic synthesis.
Tools: ping · workspace · home · persona · context · search · schema
Open the note. Trace a link. Pull an attachment. Where Gemini's million-token context shines — pull a whole ADR chain and ask Gemini to find the inconsistency, instead of summarizing it lossily.
Tools: read · experience · memory · folders · history · links · attachments
Capture, edit, organize, archive. Save the Gemini conversation as a structured decision entry, tag it, file it. Code Assist Agent Mode confirms before destructive ops — your archive doesn't change without you seeing it.
Tools: write · edit · organize · delete
When more than one model is in the room. Gemini in the CLI hands off a research finding to Claude in the next window — presence, channels, shared attention, consensus voting. Multi-model workflows without losing context.
Tools: presence · channels · coordinate · attention · thoughts · committee · agents · events · mesh_admin · pipeline · notifications
Shape the archive. Convert a Deep Research session into a structured ADR. Apply your team's template. Export to Markdown for review. Where Knovya itself becomes a tool Gemini can wield mid-task.
Tools: ai · templates · export · import · share
Personal Intelligence reads your inbox. Knovya holds your reasoning.
In February 2026 Google shipped Personal Intelligence — Gemini reading Gmail, Drive, and Calendar automatically before each task. Useful, fast, scoped. But the scope is what Google sees — your email threads, your shared docs, your calendar invites.
The decision itself — the ADR your team agreed on, the customer interview that flipped the call, the retro from Q3 nobody re-reads but everybody references — those usually don't live in a Gmail thread. They live in a knowledge base. A Notion page Gemini can't read. A markdown file in a repo it doesn't index. The reasoning is somewhere else.
Knovya is the part Gemini reads about you — your structured archive, queryable by the same Gemini surfaces that Personal Intelligence already powers. Same Gemini, two memories: Google's services on one side, your own writing on the other.
↳ §5 Lineage · How Gemini got an MCP layer, in five steps
Five steps from “Gemini is Google's chat AI” to “Gemini is an agent platform that reads any MCP server.”
An open spec for AI ↔ tools — any client to any server, the same way browsers talk to web servers. Initially Claude-only, but the protocol is open from day one.
Google ships an open-source AI agent for the terminal. The repo crosses 76,000 stars in five months. Native MCP support is part of the design — the ~/.gemini/settings.json + mcpServers shape is identical to Claude Desktop's. By September 2025 FastMCP integration lands.
The consumer Gemini app starts pulling Gmail, Drive, and Calendar context automatically before each task — scoped to Google services. Powerful, but a different shape than user-defined MCP. The first signal that memory is the next frontier for Gemini.
Developers can now pass an mcp_server directly in a Gemini API request and combine it with built-in function calling and Google Search grounding — all in a single, token-efficient call. The same week, Code Assist Agent Mode in VS Code ships MCP support (powered by Gemini CLI under the hood).
One ~/.gemini/settings.json, three Gemini surfaces (CLI, Code Assist, API), one archive of what you wrote. Gemini reads the workspace; Knovya holds the reasoning.
Google services · Gemini's own — automatically pulls Gmail, Drive, Calendar context before each task. Useful, scoped to Google services. Doesn't read your Notion, your repo wiki, your decision log unless they live in Gmail / Drive.
Repo-scoped · Persistent memory for Gemini Code Assist on GitHub — remembers prior interactions in your repos. Useful for continuity across PR reviews; doesn't extend to your decision archive or cross-tool knowledge.
App-only · Cross-session continuity inside the consumer Gemini app. Remembers preferences, recurring topics. Lives inside Gemini — doesn't show up in Cursor when you switch tools.
Closed AI · Notion's AI working over your Notion pages. Strong for summarization inside Notion; doesn't expose user-defined MCP bindings — you can't bring it to Gemini, Claude, or anyone else's chat.
Capture-first · Memory-first capture tools with light AI. Solid for personal capture; narrow MCP surfaces or none, limited cross-AI portability. The reasoning lives in Mem; Gemini doesn't read it.
Cross-model · Structured archive — decisions, retros, ADRs, customer interviews — exposed as MCP. Gemini reads it from CLI, Code Assist, and the API SDK. Same archive Claude, ChatGPT, Cursor, and Copilot read. Travels with you across every model.
They stack. Personal Intelligence handles your inbox. Code Assist memory handles your repo. Knovya handles the reasoning that doesn't fit in either — the decisions you wrote down so future-you (or future-Gemini) could find them.
Where Knovya actually shows up across the Gemini stack — the terminal, the IDE, the SDK, the config.
The /mcp command lists every connected server. Knovya shows up alongside
Filesystem and GitHub — Gemini calls it the same way.
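Under the hood, each of those calls is a standard MCP tools/call request — JSON-RPC 2.0, the same envelope every MCP client and server speaks. A minimal sketch of that envelope; the tool name knovya_search and its query argument are hypothetical, for illustration only — real names and schemas come from the server's tools/list response:

import json

# Build the JSON-RPC 2.0 envelope an MCP client sends to invoke a tool.
# "tools/call" with {"name", "arguments"} params is the standard MCP shape.
def make_tool_call(request_id, tool_name, arguments):
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

payload = make_tool_call(1, "knovya_search", {"query": "auth decisions Q3"})
print(json.dumps(payload, indent=2))

Gemini CLI builds and sends this for you; the sketch is only to show that a Knovya tool call and a GitHub or Filesystem tool call are the same wire shape.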
Agent Mode reads the same ~/.gemini/settings.json the CLI uses. Configure
once; use it in your terminal and your editor.
from google import genai

client = genai.Client()
response = client.models.generate_content(
    model="gemini-3.1-pro-preview",
    contents="Cite past auth decisions",
    config={
        "mcp_servers": [{"url": "https://mcp.knovya.com/mcp"}],
        "tools": [{"google_search": {}}],
    },
)
# MCP tool calls + Google Search grounding,
# in a single token-efficient request.
Available since March 2026. Combine MCP tools with native function calling and Google Search in one request — no separate orchestration layer.
{
"mcpServers": {
"knovya": {
"url": "https://mcp.knovya.com/mcp"
},
"github": {
"command": "docker",
"args": ["run", "-i", "github-mcp"]
},
"playwright": {
"command": "npx",
"args": ["@playwright/mcp"]
}
}
}
Project-local override at .gemini/settings.json for team-shared MCP setups.
Same shape Claude Desktop uses — the connector ecosystem speaks one language.
The same Knovya Gemini reads from is also one paste away in Claude, ChatGPT, Cursor, and any custom integration via REST. Whichever model catches your next thought, it's reading from the same archive.
The morning conversation. Reads your decision log. Anthropic-certified install path. ↳ Walk through Claude
The midnight draft. Past sessions, saved quotes, half-finished threads — already there. ↳ Walk through ChatGPT
When the same archive needs to live inside the IDE. Composer cites your ADRs. ↳ Walk through Cursor
REST + webhooks. For the integration we haven't pre-built yet — Slack bots, Zapier, custom agents. ↳ Walk through the API
Yes — across most Gemini surfaces. Gemini CLI has shipped native MCP support since 2025 (76,000+ stars on GitHub in five months). Gemini Code Assist Agent Mode in VS Code added MCP support in early 2026 and is powered by the same Gemini CLI underneath. The Gemini API and SDK gained MCP tool calling in March 2026.
The one gap as of mid-2026: Gemini Code Assist for IntelliJ and other JetBrains IDEs does not yet support MCP servers, and the consumer Gemini app's third-party MCP surface is still rolling out. For everywhere else, ~/.gemini/settings.json is the doorway.
Edit ~/.gemini/settings.json (or .gemini/settings.json in a project root) and add a knovya entry under mcpServers with the URL https://mcp.knovya.com/mcp. Restart Gemini CLI; /mcp lists Knovya alongside any other servers, and the slash commands /mcp enable knovya and /mcp disable knovya work mid-session.
OAuth handles sign-in on first use — no API key in the file. About 90 seconds end-to-end.
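The manual edit can also be scripted. A minimal sketch, assuming the default ~/.gemini/settings.json path and the mcpServers shape shown earlier — stdlib only, not an official installer:

import json
import os

# Add a knovya entry under mcpServers in a Gemini settings.json,
# preserving any servers already configured there.
def add_knovya(settings_path):
    settings = {}
    if os.path.exists(settings_path):
        with open(settings_path) as f:
            settings = json.load(f)
    settings.setdefault("mcpServers", {})["knovya"] = {
        "url": "https://mcp.knovya.com/mcp"
    }
    with open(settings_path, "w") as f:
        json.dump(settings, f, indent=2)
    return settings

# Usage: add_knovya(os.path.expanduser("~/.gemini/settings.json"))

This is exactly what gemini mcp add does for you, so prefer the CLI command when it's available; the sketch is for provisioning scripts or dotfile setups.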
Indirectly today. Google AI Studio is where you prototype against the Gemini API; the API itself supports MCP tool calls (since March 2026) and you can pass Knovya as an mcp_server in your function-calling request.
The AI Studio web UI doesn't yet have a built-in “connect MCP server” button for third-party servers — that's an active area for Google. For now: build with the SDK in AI Studio's code surface, deploy with Knovya bound as an MCP tool.
Two different things share the name. Gemini's Personal Intelligence (Feb 2026) automatically pulls context from Gmail, Drive, and Calendar — Google services, your Google account. Useful, but scoped to what Google sees.
Knovya is a structured archive you write to deliberately — decisions, retros, customer interviews, ADRs — searchable across every AI client, not just Gemini. Personal Intelligence reads your inbox; Knovya holds your reasoning. Most teams use both.
In VS Code, yes — through Agent Mode, which is powered by Gemini CLI under the hood and reads the same ~/.gemini/settings.json. Configure Knovya once; it works in both your terminal and your editor.
In IntelliJ and other JetBrains IDEs, MCP support has not yet shipped (Google's documentation explicitly states this). For JetBrains workflows today, use Knovya through Gemini CLI in a terminal pane, or through Cursor / Claude Code if those are options.
Any Gemini model with tool-calling support. Gemini 3.1 Pro (current preview flagship) and Gemini 3 Flash (default in the consumer app and CLI) both handle MCP tools well. Through Gemini CLI, the active model is set with /model — Knovya's tools are available to whichever you pick.
For high-context analysis like reading a long ADR chain, 3.1 Pro is the better fit; for quick lookups during coding, 3 Flash is faster and cheaper.
Yes. Knovya Free includes 50 calls per month — shared across MCP, REST, and webhooks. Enough to try the full toolset across Gemini CLI and Code Assist, no credit card required. Pro at $15/month lifts the cap to 5,000 calls; Team at $25 per seat removes it entirely.
Gemini Code Assist's own free tier (6,000 code requests + 240 chat requests per day for individuals) layers on top — two free tiers, one combined workflow.
Gemini CLI for the morning prompts, Code Assist Agent Mode for the afternoon refactor, the
API SDK for the script you'll cron tonight. One
~/.gemini/settings.json entry, same archive
on the other side.
Or jump to the install moment, then see pricing.
Gemini CLI · Code Assist VS Code Agent Mode · Gemini API SDK · Free tier 50 calls/month · 14-day Pro trial.