Knovya Letters · Issue 02 · Spring
Letter · 01

AI tools for researchers that synthesize, not search.

— Spring, the year of more papers than you can read

Built for the researcher with fourteen open tabs, a folder called "research", and a quote they swore they'd come back to. One letter, in three parts: where the threads live, where they should live, and what the next study finally keeps.

The Letter

Dear Researcher, on AI tools for researchers

We've watched you lose threads. The PDF you starred in February. The quote you swore you'd come back to. The participant whose phrasing was almost exactly the answer — all of them, somewhere in fourteen tabs and a folder called "research". The thread isn't gone; it's just out of reach.

The problem isn't note-taking. You take notes. You highlight, annotate, transcribe, transcribe again. The problem is that every new study starts you from zero. Last quarter's insight doesn't show up when you need it. Recognition fails where recall fails first — you'd know the answer if you saw it, but you can't quite remember writing it.

And the new generation of AI tools — Elicit, NotebookLM, Consensus, ResearchRabbit — solves a different piece. They read the literature. They surface papers, summarize PDFs, answer questions from peer-reviewed sources. That work is legitimate, and the tools are good. But the literature isn't the only archive in your life. Your own notes, your own synthesis, your own transcripts — that archive is also yours to keep, and it's the one most likely to slip away.

Knovya keeps the thread alive. NoteRank surfaces the precedent you forgot you wrote. Experience Envelope groups your past studies by what worked, what didn't, what's still open. Web Research reads twelve papers in the time it takes you to refill your coffee, then drops a structured note with citations on your desk.

And the graph compounds. Every quote you save makes every future search easier. Knowledge Graph connects the participant phrasing to the paper that named the pattern, to your own draft from three months ago, to the open question you sketched in the margin and almost forgot. Hybrid search finds it whether you remember the exact word or only the shape of the idea.

Stop losing threads. The thread is already in your archive — Knovya just hands it back.

— Knovya

The Recall · Try it

A moment of recognition, six months later.

This is what comes back when you're sure you've seen it before. Three lenses on the same archive — the precedent, the synthesis, the pattern.

search · "interview · onboarding friction"
— 3 hits · NoteRank surfacing precedent
NoteRank · 6-month archive
  1. P3 — "felt unsure where to click first"

    Direct quote from study 03 onboarding interview. Tagged as a recurring pattern; linked to two adjacent studies.

    Mar 2026 ✓ pattern ★ envelope
  2. Onboarding heuristics — Nielsen revisit

    Your annotation on a 1994 paper, re-read in February. Useful framing for what P3 was describing.

    Feb 2026 · your annotation
  3. P7 — "I didn't even see the button"

    Study 02 transcript. Same shape as P3, three months earlier. The pattern was there before you named it.

    Jan 2026 · study 02 · interview 7

The exact quote, two adjacent studies, the pattern you almost named twice.

Three lenses, one archive. The hit is the precedent you almost forgot. The synthesis is what twelve papers argued. The pattern is what your past studies already taught.

The Stack — six things, one research workflow

From the highlighted PDF to the discussion section, in one archive.

Recall, synthesis, pattern recognition, and the AI that reads with you — built for the work between studies.

  1. 01

    NoteRank

    Ten signals rank your archive — graph density, your own engagement, what you've marked as a precedent, the time since you last touched it. The participant phrasing from six months ago surfaces before you finish typing the question.

    NoteRank →
  2. 02

    Web Research

    Point Knovya at twelve papers, an article, a documentation page. It reads them, drops a structured note with citations on your desk, and links the source URLs back to anchors in the prose. Capture stops being a tab-switching tax.

    Web Research →
  3. 03

    Experience Envelope

    Past studies grouped by outcome — what shipped, what surprised, what's still open. When a question shaped like an old one returns in a new study, the envelope hands back the precedent. The retro happens before the next interview.

    Experience Envelope →
  4. 04

    Knowledge Graph

    The quote links to the paper links to your annotation links to the open question. Every reading you save makes every future search richer. The archive compounds rather than decays.

    Knowledge Graph →
  5. 05

    Hybrid Search

    Full-text search finds the exact word. Vector search finds the shape of the idea. Reciprocal Rank Fusion blends both into one ordered list. Whether you remember the keyword or only the resonance, the right note comes back.

    Hybrid Search →
  6. 06

    AI Co-Edit

    A side panel that drafts synthesis with citations linked to the source notes that informed each claim. The discussion section starts with what your archive already said — not a blank prompt and a wishful citation list.

    AI Co-Edit →

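
A side note for the technically curious: the fusion step in Hybrid Search can be sketched in a dozen lines. Reciprocal Rank Fusion is a standard rank-blending method; the note IDs and the constant k = 60 below are illustrative assumptions, not Knovya's internals.

```python
# Reciprocal Rank Fusion (RRF): blend two rankings of the same archive
# into one ordered list. Note IDs and k = 60 are illustrative.

def rrf(rankings, k=60):
    """rankings: lists of note IDs, each ordered best-first."""
    scores = {}
    for ranking in rankings:
        for rank, note_id in enumerate(ranking, start=1):
            # Each list contributes 1 / (k + rank) to a note's fused score.
            scores[note_id] = scores.get(note_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

fulltext = ["p3-quote", "nielsen-note", "study02-p7"]   # exact-word hits
semantic = ["study02-p7", "p3-quote", "open-thread"]    # shape-of-idea hits
print(rrf([fulltext, semantic]))
# → ['p3-quote', 'study02-p7', 'nielsen-note', 'open-thread']
```

A note that places well in either ranking floats upward, which is the point: the half-remembered keyword and the vague shape of the idea converge on the same result.
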
A Week, in Practice

Study 04, somewhere between protocol and write-up.

Three studies behind, one in flight, the literature open in another tab. Here's the week, in seven scenes.

  1. Mon · 09:30

    Protocol drafting

    You're writing the protocol for study 04. Search Knovya for "recognition over recall". Three hits: the P3 quote, the Nielsen annotation, the open thread from study 01. The protocol starts where the last study stopped.

  2. Tue · 11:00

    Literature, captured

    You found three papers worth reading. Web Research takes the URLs. Twenty minutes later, three structured notes with citations land in the project folder — quotes anchored, claims captured, ready to annotate.

  3. Wed · 14:00

    Interview · session one

    You record the interview and let voice transcription run. The transcript lands with named quotes. AI Co-Edit tags two passages that match the recognition-over-recall theme — the same shape, a different participant.

  4. Thu · 10:00

    Reading, annotated

    A new paper from a colleague. You read in your usual tool, but the annotations come back to Knovya — a quote, two margin notes, a thought you'd have forgotten by next week. The graph adds three edges.

  5. Fri · 15:00

    Synthesis draft

    AI Co-Edit drafts the discussion section. Three themes from twelve papers, citations linked. You edit, accept, move on. The first draft is forty minutes, not four hours — and every claim points to a note you can verify.

  6. Sat · 12:00

    A side question

    You wonder about a participant from study 02 who mentioned trust calibration. NoteRank surfaces the quote. You add it to the open thread. The aside becomes a future paper.

  7. Sun · 21:30

    Weekly review

    One paragraph: what you read, what surprised you, what's still open. Reflect & Crystals files it with thirteen other weekly reviews. Three months from now, you'll search "trust calibration" and see the whole arc.

None of this is theoretical. It's a study cycle. The literature kept moving. The thread stayed.

The Blind Spot — what the AI research stack misses

The literature isn't your only archive.

The new generation of AI research tools is genuinely good at what it was built for. Elicit reads the literature with a systematic-review discipline that wasn't possible three years ago. NotebookLM holds your uploaded PDFs without hallucinating. Consensus answers questions with citations to peer-reviewed sources. ResearchRabbit maps the citation network better than any visual tool before it. None of this is a complaint.

The blind spot is the archive these tools don't keep. Your own notes — the quotes you starred, the participant phrasing you tagged, the synthesis you drafted at 2 a.m. — live elsewhere. Maybe in a folder of Markdown, maybe in Notion, maybe in a Word doc someone named "final-final-3". Six months later, they don't surface. Recognition fails where recall fails first.

And the archive is where the actual research lives. The literature is the input; your synthesis is the output. The pattern you named, the methodology you adapted, the open question you sketched on a Friday and forgot by Monday — these are the artifacts that compound across studies. A research practice without a working archive is a practice that learns from the literature but not from itself.

Knovya is built for the second archive. NoteRank surfaces the precedent. Web Research adds new sources without losing structure. Experience Envelope groups your past studies by outcome. Knowledge Graph connects every quote to every annotation to every open question. Elicit and NotebookLM stay in your toolkit; Knovya is where what you wrote about them lives.

The literature is what others have said. Your archive is what you've heard.

The Plan — for researchers, specifically

Three ways in. Pick the one your study cycle needs.

Solo researchers run Pro. Research labs run Team. Free is enough to bring one project's notes into Knovya and see if the recall holds.

Free

$0 forever

Run one project through Knovya. Bring a study's notes, capture three papers, see if the precedent surfaces next week.

  • Up to fifty notes — one project's reading and synthesis
  • NoteRank + Hybrid Search across your own archive
  • Web Research, AI Co-Edit — limited monthly credits
  • One public link, to share a draft synthesis with a colleague
  • Templates for protocol, interview note, literature review
Open the workspace
For solo researchers

Pro

$15 per month

Built for the researcher running their own archive across studies. Unlimited notes, the full memory layer, the AI that drafts with citations.

  • Unlimited notes — every study, every quote, every annotation
  • Full NoteRank + Experience Envelope + Knowledge Graph
  • Full Web Research, AI Co-Edit, voice transcription — credits scaled to research work
  • End-to-end encryption — unpublished work and IRB-sensitive data stay private
  • Unlimited public links — share drafts and synthesis docs with collaborators
  • MCP for Claude, ChatGPT, NotebookLM-adjacent workflows
  • Markdown export with citations for Pandoc and Zotero pipelines
Start with Pro
For research labs

Team

$25 per seat / month

For PIs and labs running shared bibliographies, joint annotation, and lab-wide knowledge graphs across PhD students and postdocs.

  • Everything in Pro, for the whole lab
  • Real-time co-editing on synthesis docs and protocols
  • Shared folders by project with role-based permissions
  • Workspace-level Knowledge Graph — lab-wide reading and pattern memory
  • Lab templates and shared annotation conventions
  • SAML SSO and audit log (enterprise add-on)
Start with Team

Student or institutional researcher? Education and lab discounts are available. Talk to us →

Try Knovya for the next study.

Bring last study's notes, capture three new papers, draft one synthesis. Free is enough to see if the precedent surfaces next week.

Questions, answered

What researchers usually ask first.

  1. What are the best AI tools for researchers?

    The best AI tools for researchers split by what you're doing. For paper discovery, Elicit, ResearchRabbit, and Semantic Scholar lead the category. For source-grounded synthesis from PDFs you've uploaded, NotebookLM is the strongest free option. For evidence-based question answering with citations, Consensus excels. None of them solve the next problem: where your own notes, quotes, and synthesis live across six months of reading. Knovya is built for that layer — the archive that surfaces past insight when a new study echoes an old one.

  2. Is Knovya an alternative to Elicit or NotebookLM?

    Knovya solves a different problem. Elicit and NotebookLM read the literature — papers you find or upload. Knovya holds what you wrote about them: the quote you starred, the participant phrasing you swore you'd come back to, the synthesis you drafted at 2 a.m. and forgot about. Most working researchers run both: a discovery tool for the literature, and Knovya as the long-term archive that doesn't lose threads. The two are complementary, not competing.

  3. How does NoteRank help with literature reviews?

    NoteRank ranks your archive by relevance to a query, blending graph density, the recency of your engagement, and what you've marked as important. When you start a new study and search "onboarding friction" or "trust calibration", the precedent — a participant quote, a paper you annotated, an old draft — surfaces before you finish typing. The retro on your last study happens in the search bar.

  4. Can Knovya read papers from the web?

    Web Research lets Knovya read content you point it at — papers, articles, blog posts, documentation — and drop a structured note on your desk with citations. The note links the source URL, captures the key claims, and anchors quotes you can pull into a longer synthesis. It's not a replacement for Elicit's systematic literature search; it's the capture layer for sources you've already chosen to read.

  5. Does Knovya support citations and references?

    Notes can carry citations — author, year, source URL, page anchor, quote — as structured metadata. AI Co-Edit drafts synthesis with citations linked back to the source notes that informed the claim. Markdown export carries citations with the prose, so the synthesis flows into Zotero, Pandoc, or whatever your manuscript pipeline already uses.

  6. How is Experience Envelope useful for researchers?

    Experience Envelope groups your past studies and notes by outcome — what worked, what surprised, what's still open. When a question shaped like an old one returns in a new study, the envelope surfaces the precedent: the methodology that landed, the failure mode that recurred, the open thread you noted six months ago. The thread is already in your archive; the envelope hands it back.

  7. Can a research lab use Knovya for shared annotation?

    Yes. The Team plan supports shared workspaces with role-based folders, real-time co-editing on synthesis docs, and a workspace-level Knowledge Graph where the lab's collective reading lives in one place. PIs, postdocs, and PhD students can annotate the same paper, link quotes across studies, and search across everyone's notes. The lab's reading stops being scattered.
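
A sketch for the technically curious: the kind of ranking described in question 3 can be pictured as a weighted blend of per-note signals. The signal names, the weights, and the 180-day decay below are assumptions made for illustration, not Knovya's actual formula.

```python
import math

# Illustrative NoteRank-style scoring: a weighted blend of per-note signals.
# Signal names, weights, and the decay constant are assumptions for the sketch.
WEIGHTS = {"graph_density": 0.3, "engagement": 0.3, "precedent": 0.2, "recency": 0.2}

def note_score(note, now):
    """note: dict with links, edits, marked_precedent, last_touched (epoch secs)."""
    age_days = (now - note["last_touched"]) / 86400
    signals = {
        "graph_density": min(note["links"] / 10, 1.0),   # how connected the note is
        "engagement": min(note["edits"] / 5, 1.0),       # how often you came back
        "precedent": 1.0 if note["marked_precedent"] else 0.0,
        "recency": math.exp(-age_days / 180),            # gentle half-year-scale decay
    }
    return sum(WEIGHTS[name] * value for name, value in signals.items())

# A well-linked, marked precedent outranks an orphan note touched the same day.
precedent = {"links": 12, "edits": 6, "marked_precedent": True, "last_touched": 0}
orphan = {"links": 1, "edits": 1, "marked_precedent": False, "last_touched": 0}
```

The decay term makes the letter's point in miniature: a six-month-old note doesn't vanish, it just yields gently to fresher, denser material unless the other signals hold it up.
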