AI Skills — composable workflows that read your knowledge.

Most AI workflows live outside your notes. Skills bring them inside — fifty built-in workflows that read folders, pull precedents from your memory, and write structured output back into your knowledge base. Compatible with the Agent Skills open standard and callable as MCP tools from Claude or Cursor. Free ships every built-in skill; Pro unlocks ten custom skills of your own.

Built-in skills
50+
Skill anatomy layers
4
Standard compatible
Open
Skills
Experiment 01 · The Library

Browse the library. Watch a skill run.

Skills are reusable AI workflows — packaged with a slug, a system prompt, and a scope. Pick one. See its anatomy. Watch it produce a structured note.

Skills library Click any skill to preview →

Free — every built-in skill, no custom skills · Pro — 10 custom skills, MCP-callable · Team — 50 custom skills per workspace. See pricing.

All four · what makes a skill a skill

Anatomy of a skill — four layers.

Every Knovya skill packages four things. The same shape works for built-in skills, your own custom skills, and skills imported from the Agent Skills open standard.

Layer I

Definition

3 fields
  1. 01
    Slug & description A short slug like weekly-decision-log and a one-sentence description. The agent reads only this metadata at startup — full content loads on demand. Token-efficient by design.
  2. 02
    System prompt The instruction body. What the AI should do, in what tone, with which constraints. Stored as Markdown — versioned, diff-able, auditable.
  3. 03
    Question hints Optional prompts the AI asks before running — "which week?" or "include drafts?" Conversation collects context; the skill only fires when ready.
Layer II

Inputs

3 scopes
  1. 04
    Note scope Current note, an entire folder, a tag, a date range, or a hand-picked set. The skill reads only what you scope it to — no leaks, no spillover.
  2. 05
    Knowledge graph access Skills can follow links — depend_on, references, supersedes. A "PRD generator" can pull the linked decision notes automatically. The graph compounds.
  3. 06
    Memory bridge Skills can read your AI memory layer and the Experience Envelope — past precedents grouped by outcome. Context becomes free.
Layer III

Execution

2 controls
  1. 07
    Model tier Fast tier for grammar fixes and short summaries. Quality tier for plans, decisions, long-form. The skill author picks once; runs stay consistent.
  2. 08
    Trigger surface Run from the slash menu. Run from the AI Drawer. Run from MCP — Claude or Cursor calls your skill as if it were a native tool. One definition, four surfaces.
Layer IV

Output

3 shapes
  1. 09
    Output hints Heading structure, block types, tone. The skill specifies the shape of the answer so every run lands in a consistent form, not freeform paragraphs every time.
  2. 10
    Destination A new note in a chosen folder, an edit to the active note, an appended section, or a streamed reply in the drawer. The result lands where the workflow actually lives.
  3. 11
    Provenance & audit Every output is tagged with the skill that produced it, the input scope, and the run timestamp. Reproducible — if the skill changes, old runs stay anchored to the old version.

Built-in skills ship the four layers preconfigured. Custom skills let you set every layer yourself in Settings → Skills → New. Imported Agent Skills (Anthropic, Cursor, Continue) inherit their definition layer and pick up Knovya's input + output binding — same standard, different binding.
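The four layers above can be sketched as one definition object. This is a hypothetical shape for illustration, not Knovya's actual schema; every field name here is an assumption:

```python
from dataclasses import dataclass, field

@dataclass
class SkillDefinition:              # Layer I — Definition
    slug: str                       # e.g. "weekly-decision-log"
    description: str                # one sentence; read at startup
    system_prompt: str              # full instruction body, loaded on demand
    question_hints: list[str] = field(default_factory=list)

@dataclass
class SkillInputs:                  # Layer II — Inputs
    scope: str                      # "note" | "folder" | "tag" | "date-range"
    follow_links: bool = False      # traverse depend_on / references edges
    use_memory: bool = False        # read the memory layer / Experience Envelope

@dataclass
class SkillExecution:               # Layer III — Execution
    model_tier: str = "fast"        # "fast" | "quality"
    surfaces: tuple = ("slash", "drawer", "mcp")

@dataclass
class SkillOutput:                  # Layer IV — Output
    shape_hints: str = ""           # heading structure, block types, tone
    destination: str = "new-note"   # or "edit-note" | "append" | "stream"

@dataclass
class Skill:
    definition: SkillDefinition
    inputs: SkillInputs
    execution: SkillExecution
    output: SkillOutput

skill = Skill(
    SkillDefinition("weekly-decision-log",
                    "Summarize this week's decisions",
                    "Collect all decision notes from the scoped week..."),
    SkillInputs(scope="folder", follow_links=True),
    SkillExecution(model_tier="quality"),
    SkillOutput(shape_hints="H2 per decision, bullets for rationale"),
)
print(skill.definition.slug)
```

The point of the shape: the Definition layer is cheap metadata the agent can scan at startup, while the heavier layers load only when the skill actually runs.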

Every prompt is rewritten from scratch
and your knowledge stays trapped outside the editor.

You found the right way to summarize a meeting. You wrote the perfect prompt for extracting action items. By Friday, both prompts are gone — copied into ChatGPT, then closed.

Notion AI templates are fixed. Zapier connects apps but cannot read your knowledge. Raw GPTs live in another tab. The work that should compound — keeps starting over.

The cost
Survey of knowledge workers using AI assistants in 2026: ~70% rewrite the same prompts every week, and their outputs still drift in tone, format, and quality between runs.
The fix
Stop typing the same prompt twice. Package it as a Skill — slug, scope, and shape — once.
The lineage

From function calling to your knowledge base.

AI Skills sit on the shoulders of three years of agent infrastructure — culminating in the Agent Skills open standard, which Knovya extends with knowledge binding.

  1. 2023
    OpenAI — Function Calling Structured outputs from a language model. The first time an LLM could reliably emit JSON to invoke a tool — turning prose into infrastructure. OpenAI · API release
  2. 2023
    OpenAI — GPTs Reusable AI personas with custom instructions, knowledge files, and tools. Proved that "save my prompt" was a feature, not a script. OpenAI · DevDay
  3. 2024
    Anthropic — Tool Use & Computer Use Claude operating an environment, not just answering. Tool definitions became composable; agents started running for hours, not turns. Anthropic · API release
  4. Oct 2025
    Anthropic — Agent Skills (open standard) Markdown folders with progressive disclosure, published as an open standard at agentskills.io. Skills work across Claude, Cursor, Gemini CLI, Codex — model-agnostic, portable. Anthropic · open standard
  5. 2026
    Knovya AI Skills Agent Skills bound to your knowledge base. Slug, scope, shape — but the input is your notes, the output lands in your graph, and the run is callable from MCP. Where Skills meet memory. ★ Knovya · production
First of its kind

Nobody runs Skills on your notes.

The Agent Skills standard is real, open, and growing. What it does not have — yet — is a knowledge base to read from. Anthropic skills run inside Claude. Cursor skills run inside a repo. Knovya skills run inside your second brain. Same standard, different binding.

  • Anthropic Agent Skills Skills + Claude · code execution
  • OpenAI GPTs custom instructions · chat sandbox
  • Cursor / Continue Skills Skills + repo · developer focus
  • Zapier / n8n / Gumloop workflows + actions · no knowledge
  • Notion AI templates fixed templates · proprietary
  • ★ Knovya AI Skills Skills + your notes · open-standard compatible
Surfaces

One skill, four places to run it.

A skill defined once is callable from every surface where you already write — including from Claude or Cursor through MCP.

Slash menu in editor

Type / in any note. Skills appear inline alongside transforms and templates — searchable by slug, ranked by recent use.

AI Drawer pill co-edit

Open the Drawer. Pick a skill from the picker. The skill loads as a pill in the input — alongside Research and other modifiers. Removable, swappable.

MCP catalog agent-callable

Custom skills register as MCP tools automatically. Claude or Cursor sees them next to knovya_search. Your skill becomes a callable function — anywhere MCP runs.
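Conceptually, registration means each skill maps onto an MCP-style tool descriptor (a `name`, a `description`, and a JSON Schema for inputs, per the MCP spec). The mapping below is an illustrative sketch, not Knovya's actual wire format, and a real server would use an MCP SDK:

```python
import json

def skill_to_mcp_tool(slug, description, question_hints):
    """Sketch: map a skill definition onto an MCP-style tool descriptor.
    Question hints become optional string parameters the caller can fill."""
    return {
        "name": slug.replace("-", "_"),   # tool names avoid hyphens
        "description": description,
        "inputSchema": {
            "type": "object",
            "properties": {hint: {"type": "string"} for hint in question_hints},
        },
    }

tool = skill_to_mcp_tool(
    "weekly-decision-log",
    "Summarize this week's decision notes into a structured log.",
    ["week", "include_drafts"],
)
print(json.dumps(tool, indent=2))
```

From the caller's side, Claude or Cursor then sees `weekly_decision_log` in the tool list exactly as it would see a built-in like `knovya_search`.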

Run history audit log

Every skill run is logged with input scope, output destination, model used, and outcome. Reproducible — old runs stay anchored to the skill version that produced them.
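An append-only log with a pinned skill version is what makes old runs reproducible. A minimal sketch, with field names that are assumptions rather than Knovya's actual log schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)             # frozen: records never mutate after logging
class SkillRun:
    skill_slug: str
    skill_version: int              # pinned, so editing the skill later
    input_scope: str                # does not rewrite history
    output_destination: str
    model_tier: str
    outcome: str
    ran_at: str                     # ISO-8601 timestamp

def log_run(history, run):
    """Append-only audit log: add the record, never edit earlier ones."""
    history.append(run)
    return history

history = []
log_run(history, SkillRun(
    skill_slug="weekly-decision-log",
    skill_version=3,
    input_scope="folder:Decisions/2026-W07",
    output_destination="new-note:Logs/",
    model_tier="quality",
    outcome="success",
    ran_at=datetime.now(timezone.utc).isoformat(),
))
print(history[0].skill_slug, "v", history[0].skill_version)
```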

Frequently asked

A few honest answers.

What is an AI workflow?
An AI workflow is a reusable sequence of AI actions that runs against a defined input and produces a structured output. In Knovya, an AI Skill is a workflow that reads notes from your knowledge base, runs a custom prompt with a specific model, and writes the output as a new note, an edit to an existing note, or an appended section. Skills are composable — they can chain transforms, call MCP tools, and trigger other skills.
How is an AI Skill different from a prompt?
A prompt is one-time text. A Skill packages the prompt with three things you cannot get from copying text into ChatGPT: scoped inputs that name which notes the AI reads, structured output hints that shape the response into headings and blocks, and a slug you can call again next week without rewriting anything. Skills are versioned, shareable, and callable from MCP — your prompt becomes infrastructure.
Are Knovya AI Skills compatible with Anthropic Agent Skills?
Yes. Knovya skills follow the Agent Skills open standard published in October 2025 — SKILL.md frontmatter, progressive disclosure, model-agnostic. The difference: Anthropic skills run inside Claude with code execution. Knovya skills run inside your knowledge base, with your notes as the primary context. The two implementations interoperate through MCP.
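For reference, a SKILL.md under the open standard pairs YAML frontmatter (the standard requires `name` and `description`) with a Markdown instruction body. The file below is an illustrative sketch, not a shipped Knovya skill:

```markdown
---
name: weekly-decision-log
description: Summarize the week's decision notes into a structured log.
---

# Weekly decision log

Collect every decision note in scope, group decisions by outcome, and
write one H2 section per decision with rationale bullets beneath it.
```

Only the frontmatter is read at startup; the body loads when the skill runs — the progressive disclosure the standard is built around.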
How do I build my own AI Skill?
Open Settings → Skills → New. Give it a slug, a description, a system prompt, and optional output hints (heading structure, format, tone). Choose what notes the skill reads — current note, folder, tag, or workspace. Choose where the output lands. Save. Run it from the slash menu or from the AI Drawer. Editing a skill versions it; old runs stay reproducible.
Are AI Skills free?
All built-in skills are free for everyone — meeting summaries, decision logs, PRD generators, action item extractors, and more. Custom skills require Pro (10 custom skills) or Team (50 custom skills per workspace). Each skill run consumes AI credits per your plan.
Can I share my custom AI Skills with my team?
Yes, on the Team plan. Custom skills can be scoped to a workspace, where any member can run them. Skill outputs land in shared folders. A future release will add public skill sharing through the Knovya skill directory, similar to the Anthropic partner directory.
How is this different from Zapier or n8n?
Zapier and n8n connect apps to apps — they are action-bound. AI Skills are knowledge-bound. They read your notes, reason over them, and write structured knowledge back. There is no API authentication step, no webhook setup, no connector library. The skill operates inside the same knowledge graph that holds your work, which means context is free and the output stays linked to its source.

Run your first skill in 90 seconds.

Fifty built-in skills work on day one — every plan, no setup. Custom skills on Pro. Workspace-shared skills on Team.
