
Gemini reads everything Google sees. Knovya holds what you wrote.

Personal Intelligence pulls Gmail, Drive, and Calendar into Gemini automatically — useful, scoped to what Google sees. Your decision log lives somewhere else. The customer interview from October. The ADR your team agreed on last quarter. The retro nobody re-reads but everybody references.

Knovya is the part Gemini reads about you. Same archive across Gemini CLI in your terminal, Code Assist Agent Mode in VS Code, and the Gemini API in your scripts. One MCP server, one ~/.gemini/settings.json entry, every Gemini surface that supports MCP.

3 Gemini surfaces · CLI + Code Assist + API
76K stars on Gemini CLI · five months
MCP native in CLI · API · VS Code Agent Mode
The install moment

One settings.json. Every Gemini surface that supports MCP.

Gemini CLI, Code Assist Agent Mode in VS Code, and the Gemini API SDK all read MCP servers from the same file. Add Knovya once; use it everywhere Gemini does.
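For reference, the entry itself is small — a minimal ~/.gemini/settings.json with only Knovya configured (your file may already list other servers alongside it):

```json
{
  "mcpServers": {
    "knovya": {
      "url": "https://mcp.knovya.com/mcp"
    }
  }
}
```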

Path I

Gemini CLI

npm install -g @google/gemini-cli, then edit ~/.gemini/settings.json and add a knovya entry under mcpServers pointing at https://mcp.knovya.com/mcp. Or run gemini mcp add knovya https://mcp.knovya.com/mcp to let the CLI write the config for you.

Path II

Code Assist Agent Mode

Install the Gemini Code Assist extension in VS Code. Agent Mode reads the same ~/.gemini/settings.json Gemini CLI uses — Knovya is already there. Open the chat, switch to Agent, and Gemini calls Knovya's tools mid-task.

Path III

Gemini API & SDK

Pass Knovya as an mcp_server in your Gemini API request — supported in the API and SDK since March 2026. google-genai for Python, @google/genai for Node. Combine MCP tool calls with Google Search grounding in a single request.

Behind the door

Same archive, 34 tools, organized for the conversation.

Gemini reaches into Discovery first when you ask “what did we decide about X last quarter?” Read tools fire when it cites a customer interview. Write tools save the chat as a structured decision log — with your confirmation. The same 34 tools that Claude and Cursor use; Gemini reads them the same way.

  • 0 · Discovery · 7 tools

    Where am I, what's here. Read-only orientation tools Gemini calls first — before guessing from training data. Long-context Gemini 3.1 Pro pairs especially well with knovya_context for whole-topic synthesis.

    ping · workspace · home · persona · context · search · schema
  • 1 · Read · 7 tools

    Open the note. Trace a link. Pull an attachment. Where Gemini's million-token context shines — pull a whole ADR chain and ask Gemini to find the inconsistency, instead of summarizing it lossily.

    read · experience · memory · folders · history · links · attachments
  • 2 · Write · 4 tools

    Capture, edit, organize, archive. Save the Gemini conversation as a structured decision entry, tag it, file it. Code Assist Agent Mode confirms before destructive ops — your archive doesn't change without you seeing it.

    write · edit · organize · delete
  • 3 · Mesh · 11 tools

    When more than one model is in the room. Gemini in the CLI hands off a research finding to Claude in the next window — presence, channels, shared attention, consensus voting. Multi-model workflows without losing context.

    presence · channels · coordinate · attention · thoughts · committee · agents · events · mesh_admin · pipeline · notifications
  • 4 · Transform · 5 tools

    Shape the archive. Convert a Deep Research session into a structured ADR. Apply your team's template. Export to Markdown for review. Where Knovya itself becomes a tool Gemini can wield mid-task.

    ai · templates · export · import · share
The problem

Personal Intelligence reads your inbox. Knovya holds your reasoning.

In February 2026 Google shipped Personal Intelligence — Gemini reading Gmail, Drive, and Calendar automatically before each task. Useful, fast, scoped. But the scope is what Google sees — your email threads, your shared docs, your calendar invites.

The decision itself — the ADR your team agreed on, the customer interview that flipped the call, the retro from Q3 nobody re-reads but everybody references — those usually don't live in a Gmail thread. They live in a knowledge base. A Notion page Gemini can't read. A markdown file in a repo it doesn't index. The reasoning is somewhere else.

Knovya is the part Gemini reads about you — your structured archive, queryable by the same Gemini surfaces that Personal Intelligence already powers. Same Gemini, two memories: Google's services on one side, your own writing on the other.

↳ §5 Lineage · How Gemini got an MCP layer, in five steps

Lineage

How Gemini got an MCP layer.

Five steps from “Gemini is Google's chat AI” to “Gemini is an agent platform that reads any MCP server.”

  1. 2024

    Anthropic ships the Model Context Protocol.

    An open spec for AI ↔ tools — any client to any server, the same way browsers talk to web servers. Initially Claude-only, but the protocol is open from day one.

  2. 2025

    Gemini CLI launches — open source from the start.

    Google ships an open-source AI agent for the terminal. The repo crosses 76,000 stars in five months. Native MCP support is part of the design — the ~/.gemini/settings.json + mcpServers shape is identical to Claude Desktop's. By September 2025 FastMCP integration lands.

  3. 2026 · February

    Personal Intelligence + Gemini 3.1 Pro arrive.

    The consumer Gemini app starts pulling Gmail, Drive, and Calendar context automatically before each task — scoped to Google services. Powerful, but a different shape than user-defined MCP. The first signal that memory is the next frontier for Gemini.

  4. 2026 · March

    MCP support lands in the Gemini API and SDK.

    Developers can now pass an mcp_server directly in a Gemini API request and combine it with built-in function calling and Google Search grounding — all in a single, token-efficient call. The same week, Code Assist Agent Mode in VS Code ships MCP support (powered by Gemini CLI under the hood).

  5. 2026 · today

    Knovya is the MCP server Gemini calls about you.

    One ~/.gemini/settings.json, three Gemini surfaces (CLI, Code Assist, API), one archive of what you wrote. Gemini reads the workspace; Knovya holds the reasoning.

First mover

Six “memory” features. Six different things. Knovya is the one that travels with you.

Personal Intelligence

Gemini's own — automatically pulls Gmail, Drive, and Calendar context before each task. Useful, scoped to Google services. Doesn't read your Notion, your repo wiki, or your decision log unless they live in Gmail / Drive.

Google services

Gemini Code Assist memory

Persistent memory for Gemini Code Assist on GitHub — remembers prior interactions in your repos. Useful for continuity across PR reviews; doesn't extend to your decision archive or cross-tool knowledge.

Repo-scoped

Gemini app session memory

Cross-session continuity inside the consumer Gemini app. Remembers preferences, recurring topics. Lives inside Gemini — doesn't show up in Cursor when you switch tools.

App-only

Notion AI

Notion's AI working over your Notion pages. Strong for summarization inside Notion; doesn't expose user-defined MCP bindings — you can't bring it to Gemini, Claude, or anyone else's chat.

Closed AI

Mem AI / second-brain apps

Memory-first capture tools with light AI. Solid for personal capture; narrow MCP surfaces or none, limited cross-AI portability. The reasoning lives in Mem; Gemini doesn't read it.

Capture-first

Knovya for Gemini

Structured archive — decisions, retros, ADRs, customer interviews — exposed as MCP. Gemini reads it from CLI, Code Assist, and the API SDK. Same archive Claude, ChatGPT, Cursor, and Copilot read. Travels with you across every model.

Cross-model

They stack. Personal Intelligence handles your inbox. Code Assist memory handles your repo. Knovya handles the reasoning that doesn't fit in either — the decisions you wrote down so future-you (or future-Gemini) could find them.

Surfaces

Four moments. One archive.

Where Knovya actually shows up across the Gemini stack — the terminal, the IDE, the SDK, the config.

Gemini CLI · terminal i.

“What did we decide about onboarding?”

$ gemini
› /mcp
filesystem · github · knovya (34)
› What did we decide about onboarding last quarter?
↳ Gemini reaches into your archive
knovya_search(query="onboarding decision Q3")
knovya_experience(topic="onboarding flow")
↳ 4 notes · 2 successful precedents · 1 cautionary
✓ Cites the ADR, the customer interviews, the retro you wrote.

The /mcp command lists every connected server. Knovya shows up alongside Filesystem and GitHub — Gemini calls it the same way.

VS Code · Code Assist Agent ii.

Agent Mode, same archive.

// Gemini Code Assist · Agent Mode
→ Refactor /api/auth using past decisions
↳ Plan
1. Search archive for auth ADRs
2. Read related retros
3. Apply changes to src/auth/
knovya_search(query="auth strategy ADR")
knovya_read(note_id="adr-014-auth-v2")
✓ Patch drafted. Cites ADR-014. Approve?

Agent Mode reads the same ~/.gemini/settings.json the CLI uses. Configure once; use it in your terminal and your editor.

Python · google-genai SDK iii.

MCP in the API request.

from google import genai

client = genai.Client()

response = client.models.generate_content(
    model="gemini-3.1-pro-preview",
    contents="Cite past auth decisions",
    config={
        "mcp_servers": [{
            "url": "https://mcp.knovya.com/mcp"
        }],
        "tools": [{"google_search": {}}]
    }
)

# MCP tool calls + Google Search grounding,
# in a single token-efficient request.

Available since March 2026. Combine MCP tools with native function calling and Google Search in one request — no separate orchestration layer.

~/.gemini/settings.json iv.

Single source of truth.

{
  "mcpServers": {
    "knovya": {
      "url": "https://mcp.knovya.com/mcp"
    },
    "github": {
      "command": "docker",
      "args": ["run", "-i", "github-mcp"]
    },
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp"]
    }
  }
}

Project-local override at .gemini/settings.json for team-shared MCP setups. Same shape Claude Desktop uses — the connector ecosystem speaks one language.
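As a sketch, a team-shared project file carries the same shape — a .gemini/settings.json committed at the repo root, so teammates who pull the repo get the same servers without touching their global config:

```json
{
  "mcpServers": {
    "knovya": {
      "url": "https://mcp.knovya.com/mcp"
    }
  }
}
```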

Free 50 calls / month — enough to try Knovya in Gemini CLI
Pro · $15/mo 5,000 calls · encrypted notes
Team · $25/seat Unlimited · shared archive
See pricing
Questions, in advance

Gemini & Knovya — before you ask.

Does Gemini support MCP?

Yes — across most Gemini surfaces. Gemini CLI has shipped native MCP support since 2025 (76,000+ stars on GitHub in five months). Gemini Code Assist Agent Mode in VS Code added MCP support in early 2026 and is powered by the same Gemini CLI underneath. The Gemini API and SDK gained MCP tool calling in March 2026.

The one gap as of mid-2026: Gemini Code Assist for IntelliJ and other JetBrains IDEs does not yet support MCP servers, and the consumer Gemini app's third-party MCP surface is still rolling out. For everywhere else, ~/.gemini/settings.json is the doorway.

How do I add Knovya to Gemini CLI?

Edit ~/.gemini/settings.json (or .gemini/settings.json in a project root) and add a knovya entry under mcpServers with the URL https://mcp.knovya.com/mcp. Restart Gemini CLI; /mcp lists Knovya alongside any other servers, and the slash commands /mcp enable knovya and /mcp disable knovya work mid-session.

OAuth handles sign-in on first use — no API key in the file. About 90 seconds end-to-end.

Can I use Knovya with Gemini in Google AI Studio?

Indirectly today. Google AI Studio is where you prototype against the Gemini API; the API itself supports MCP tool calls (since March 2026) and you can pass Knovya as an mcp_server in your function-calling request.

The AI Studio web UI doesn't yet have a built-in “connect MCP server” button for third-party servers — that's an active area for Google. For now: build with the SDK in AI Studio's code surface, deploy with Knovya bound as an MCP tool.

Does Gemini have memory? How is it different from Knovya?

Two different things share the name. Gemini's Personal Intelligence (Feb 2026) automatically pulls context from Gmail, Drive, and Calendar — Google services, your Google account. Useful, but scoped to what Google sees.

Knovya is a structured archive you write to deliberately — decisions, retros, customer interviews, ADRs — searchable across every AI client, not just Gemini. Personal Intelligence reads your inbox; Knovya holds your reasoning. Most teams use both.

Does Gemini Code Assist support MCP servers?

In VS Code, yes — through Agent Mode, which is powered by Gemini CLI under the hood and reads the same ~/.gemini/settings.json. Configure Knovya once; it works in both your terminal and your editor.

In IntelliJ and other JetBrains IDEs, MCP support has not yet shipped (Google's documentation explicitly states this). For JetBrains workflows today, use Knovya through Gemini CLI in a terminal pane, or through Cursor / Claude Code if those are options.

Which Gemini models can I use with Knovya?

Any Gemini model with tool-calling support. Gemini 3.1 Pro (current preview flagship) and Gemini 3 Flash (default in the consumer app and CLI) both handle MCP tools well. Through Gemini CLI, the active model is set with /model — Knovya's tools are available to whichever you pick.

For high-context analysis like reading a long ADR chain, 3.1 Pro is the better fit; for quick lookups during coding, 3 Flash is faster and cheaper.

Is there a free tier for Knovya with Gemini?

Yes. Knovya Free includes 50 calls per month — shared across MCP, REST, and webhooks. Enough to try the full toolset across Gemini CLI and Code Assist, no credit card required. Pro at $15/month lifts the cap to 5,000 calls; Team at $25 per seat removes it entirely.

Gemini Code Assist's own free tier (6,000 code requests + 240 chat requests per day for individuals) layers on top — two free tiers, one combined workflow.

Open Gemini in your terminal.
Knovya in the conversation.

Gemini CLI for the morning prompts, Code Assist Agent Mode for the afternoon refactor, the API SDK for the script you'll cron tonight. One ~/.gemini/settings.json entry, same archive on the other side.

Or jump to the install moment, then see pricing.

Gemini CLI · Code Assist VS Code Agent Mode · Gemini API SDK · Free tier 50 calls/month · 14-day Pro trial.