Use Case · Problem 10 Customer Research
Chapter II · When the team forgets
Twelve customer interviews this quarter. Three people watched the recordings. The transcripts are in Drive, named by date. The themes only exist in the head of whoever ran the interview — and six weeks later the PRD goes out, written from memory.

Customer research that turns into product, not a folder of unread MP4s.

Recording is solved. Listening is the work. The expensive part of an interview isn't the hour you spent — it's the conversion from conversation to theme, theme to persona, persona to product decision. We built the part most teams skip: an archive where the customer's voice is already organized into the patterns the team can act on.

4 moves Capture · Cluster · Distill · Ship
One archive Interviews → personas → PRD, connected
Inside Cursor The right quote surfaces while you're writing the spec
§ 02 · The diagnosis

Recordings stack up. Listening doesn't.

What's actually wrong

Customer interviews are an expensive way to learn things and a cheap way to lose them. A 45-minute conversation costs scheduling, prep, the call itself, the awkward silence at minute thirty-eight where a real answer finally lands — and then it gets uploaded to Drive as a 230 MB MP4 and a transcript nobody opens.

The work that actually matters — reading the corpus, tagging passages, clustering themes, drafting the persona, finding the quote that justifies a roadmap call — is the work that gets skipped. Not because the team is lazy. Because nobody owns it, and the tools force you to choose between recording in Zoom, transcribing in Otter, tagging in Dovetail, and writing the PRD in Notion. Four surfaces, three logins, zero connections between them.

Six weeks later, when somebody on the team writes the spec, the customer voice in that spec is whoever-ran-the-interview's memory of the customer voice. Which is not the same thing.

What we built instead

Every recording lands in the archive with the transcript attached, speakers labelled, timestamps intact. AI tags themes the moment a third or fourth interview confirms a pattern — price sensitivity, onboarding drop, integration gap — and the quotes that justify each tag are one click away.

Personas draft themselves from clustered quotes. Jobs-to-be-Done statements emerge from the patterns the corpus is already showing you. The research isn't a deliverable you produce after the round; it's a layer that grows during the round and stays linked to every primary source it came from.

Then — the part most research tools stop short of — when the PM opens Cursor and starts writing the PRD, the relevant interview quotes surface inside the editor. Not as a dashboard the PM has to remember to check. As a citation that arrives when the section is being written.

The interview is the cheap part. Acting on what it said is the product.

§ 03 · The lab

Watch a recording become a roadmap call.

Four moments from a typical research round. Pick one — the archive lights up the features that move actually uses. No live data, no signup; the moves are real, the quotes are illustrative.

  1. Move 01 Capture

    Forty-five minutes in. The transcript is already written; you're still listening, not typing.

    Vn Voice Notes Cv Conversation→Note
  2. Move 02 Cluster

    The session lands tagged against the rolling theme list — 'price-clarity' is now in three calls.

    Tr AI Transforms Bl Backlinks
  3. Move 03 Distill

    Two prior interviews where someone said the same thing surface beside the new one. The pattern declares itself.

    Hs Hybrid Search Ee Experience Envelope
  4. Move 04 Ship

    The quote is already linkable from anywhere on the team — the PM doesn't need to ask you for it.

    Sn Share Notes

Tiago Forte called his four moves for personal knowledge CODE: Capture, Organize, Distill, Express. We renamed the same shape for the work researchers actually do.

§ 04 · The components

Twelve features, four moves.

Knovya's twenty-six elements aren't all for research — but twelve of them carry this work cleanly. Here's which features map to which move, and which pieces are doing the heaviest lifting for an interview-driven workflow. The full table lives at /features.

C Capture

Get the customer's words out of the call and into a place that can be searched, tagged, and read by the rest of the team.

07 Vn
Voice Notes

Record the call, get a speaker-aware transcript with timestamps.

09 Cv
Conversation→Note

Paste a Zoom transcript or chat log; it lands as a structured note.

06 Wr
Web Research

Background reading, competitor pages, public posts — clipped with sources intact.

C Cluster

The corpus arranges itself — tags, themes, semantic neighbors, ranked by what's repeating across calls.

04 Tr
AI Transforms

Tag passages by theme; cluster quotes; pull JTBD statements.

13 Kg
Knowledge Graph

A view of which themes connect across which interviews.

15 Bl
Backlinks

Every quote, theme, and persona linked back to the source it came from.

D Distill

Themes become personas. Personas become JTBD statements. The right quote rises before the question is finished.

14 Hs
Hybrid Search

Search the corpus by keyword and meaning at once.

12 Ee
Experience Envelope

Past interviews surface alongside the new one — the precedent does the talking.

02 Am
AI Memory

Forgotten quotes return when their topic comes up in a new draft.

S Ship

The corpus is the start of the next document — PRD, persona doc, share-out — exposed to every editor and AI you already use.

22 Sn
Share Notes

Publish the persona, share the quote bank, keep the raw transcripts private.

01 Mc
MCP

Cursor, Claude, ChatGPT read the research archive while you write the spec.

21 Tp
Templates

Persona, JTBD, interview-guide, debrief — all linkable, all editable.

§ 05 · The lineage

Listening to the customer, seventy years of method.

Customer research didn't start with software. The methods we use — field notes, usability testing, jobs-to-be-done, modern repositories — arrived in waves, each solving the bottleneck the last left behind. The current bottleneck is synthesis.

  1. 1959 Erving Goffman

    Field notes — the ethnographer's notebook

    Goffman's Presentation of Self in Everyday Life codifies a discipline sociology had been practicing for decades: write down what people actually did, not what they said they did. The notebook is the instrument. Every customer interview since is a descendant of the field note.

  2. 2000 Steve Krug

    Don't Make Me Think — usability becomes a verb

    Krug's book makes guerrilla usability testing legible to non-researchers. Five users in a hallway, a stopwatch, and a notebook produce more honest signal than a twelve-page survey. The method outlasts the era it was written in — teams still use it today.

  3. 2003 Clayton Christensen

    Jobs-to-be-Done — ask what they hired the product for

    Christensen and Raynor reframe the interview question itself. Not what features do you want but what job did you hire this product to do? The milkshake study becomes the most-cited illustration in product strategy for two decades. The question shape changes; the listening discipline doesn't.

  4. 2017 Dovetail

    The research repository, productized

    Dovetail (and EnjoyHQ alongside it) turns the spreadsheet of tags into a product. Tag a transcript, cluster the tags, share with the team, search across rounds. The repository as a category is born. Notably, Aurelius, and a dozen others follow within five years.

  5. 2023–25 The AI synthesis turn

    Themes that surface themselves

    Marvin, the AI features inside Dovetail and Notably, NotebookLM: embeddings are finally cheap enough that theme extraction can happen across a whole corpus, not in a spreadsheet column. Tagging falls to the machine. The discipline that survives is asking the right question and reading the transcript at least once.

  6. 2026 Knovya

    A research archive the PRD reads from

    We built the part the previous generation stopped short of: the archive doesn't just hold interviews; it surfaces them inside the document that's about to act on them. Through MCP, when a PM opens Cursor and starts writing the spec, the relevant quote arrives where the citation goes. The research stops being a deliverable. It becomes load-bearing.

§ 06 · The bets

Five research tools. Five different bets.

Most tools in this category are wagering on a step — the recording, the tagging, the repository, the AI synthesis. The honest comparison isn't features; it's which step each tool decided to be best at, and which one it leaves to you (or your other twelve tabs).

App · The bet · What's left to you
Dovetail Research repository, team-led

The bet The repository as a category. Tag a transcript, cluster the tags, share with the team, search across rounds. The reference point for what a research tool ought to feel like.

What's left to you The hand-off into product. The repo is beautifully self-contained — insights live there, the PRD lives elsewhere, and the citation back is whatever you remember to paste.

Notably Collaborative tagging

The bet Real-time, shared tagging. A research team highlights and codes the same transcript together, watches the themes form on a board, runs lightweight synthesis inside the same surface.

What's left to you The downstream document. The board is where the work feels alive; the persona doc and the PRD that act on it still live somewhere else.

Aurelius Atomic insight cards

The bet The insight as the unit. One observation per card, evidenced by quotes, searchable by theme. The Zettelkasten of customer research.

What's left to you The conversation between insights and decisions. Cards stay in the research tool; the spec that needs them stays in Notion or Linear.

Notion Database with pages

The bet One workspace for everything. Interview notes, personas, PRDs, and roadmap all on the same page tree. If you're disciplined enough, the citation never has to leave the system.

What's left to you The synthesis. Notion holds the words; tagging, theming, persona-drafting are on you. Search is keyword-only — the right quote doesn't come find you.

Knovya Research that the PRD reads from

The bet The corpus and the spec are the same archive. Tag and theme like a research tool; link and surface like a knowledge base; expose to Cursor and Claude through MCP so the quote appears where the spec gets written.

What's left to you Asking the right question. The synthesis layer is on the system. Listening — the part of research humans are actually good at — is the only piece left for you.

Other apps pick a step to be best at. We picked the connection from research to product.

§ 07 · Surfaces

Where research actually happens.

A research tool earns its place at the four surfaces a researcher already lives in: the call, the synthesis canvas, the persona doc, and — the one most tools forget — the editor where the team writes the spec.

Surface 01 · Phone

The interview, transcribed live.

Speaker-aware transcript while the call is still happening — you stay present, the archive captures verbatim.

Surface 02 · Desktop

Themes, repeating themselves.

Twelve interviews, four themes: the dense regions form on their own — price-clarity, onboarding, integration gap, jobs-to-be-done.

Surface 03 · Browser

Personas drafted from the corpus, not from memory.

Each claim in the persona is one click from the interview quote that justifies it. Hover any sentence; see the source.

Surface 04 · Cursor / Claude / ChatGPT

The customer voice, inside the spec.

Through MCP, the editor reads from the research archive while you write. The citation arrives where the citation goes — no copy-paste, no tab-switch.
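A rough sketch of what that looks like under the hood: the kind of lookup a "find supporting quotes" tool on the archive's MCP server might run when the editor asks for evidence. The quote bank, interview IDs, and the `quotes_for` name are all illustrative assumptions, not Knovya's actual API.

```python
from dataclasses import dataclass

@dataclass
class Quote:
    interview: str    # source call, e.g. "call-03" (illustrative)
    timestamp: str
    text: str
    themes: list

# Illustrative quote bank; in practice this lives in the research archive.
BANK = [
    Quote("call-03", "00:38:10", "I honestly couldn't tell what the plan would cost us.", ["price-clarity"]),
    Quote("call-07", "00:12:45", "The pricing page only made sense after a sales call.", ["price-clarity", "onboarding"]),
    Quote("call-09", "00:21:02", "We stalled during setup; the import kept failing.", ["onboarding"]),
]

def quotes_for(theme: str, limit: int = 5) -> list:
    """Return formatted citations for every quote tagged with a theme."""
    hits = [q for q in BANK if theme in q.themes]
    return [f'"{q.text}" ({q.interview} @ {q.timestamp})' for q in hits[:limit]]
```

When the PM types a section on pricing, the editor can call a tool shaped like this with the theme "price-clarity" and drop the returned citations exactly where the claim is made.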

§ 08 · Bonded with

How this connects to the rest of the workflow.

Customer research isn't a terminus — it's upstream. The corpus feeds the PRD, which feeds the meeting, which feeds the decision log. Here's the constellation around this page.

§ 09 · Pick a recording

Pick a recording. Start there.

A research archive isn't built in one round. It's built one transcribed call, one tagged theme, one persona drafted from quotes at a time. The archive starts paying back the moment the first interview lands in it.

Or scroll back to the diagnosis.

§ 09b · The questions

The things teams ask before they switch.

Eight questions we keep getting from PMs, UX researchers, and founders. If yours isn't here, the contact page reaches us directly.

  1. Q · 01 What is a user interview?

    A user interview is a structured conversation with someone in your target audience — usually 30 to 60 minutes — meant to surface the problems, motivations, and language they use around a product or task. It's the foundational method of qualitative customer research, used by product managers, UX researchers, and founders to figure out what to build before they build it.

  2. Q · 02 What is user research?

    User research is the practice of learning what your users actually do, want, and struggle with — through interviews, usability tests, surveys, and observation — and turning that into product decisions. It pre-dates the software industry; landmark software-era moments include Steve Krug's usability work in 2000 and Clayton Christensen's Jobs-to-be-Done framing in 2003.

  3. Q · 03 How do you take notes during a user interview?

    The honest answer: you don't. Trying to type or write while interviewing means you stop listening. The current best practice is to record (with consent), let an AI transcribe in the background, and stay fully present with the person. Any quick observation you do write down should be a tag or a question for follow-up — not a verbatim quote.

  4. Q · 04 How do you organize interview notes?

    Three layers. Raw transcript with speaker labels and timestamps. Quote bank tagged by theme. Synthesis layer — themes, tensions, personas, JTBD statements — that links back to the quotes that justify it. Each layer should be searchable and connected, so when a PRD references "price sensitivity," the original interview where someone said it is one click away.
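The three layers are easy to picture as a linked data model. A minimal sketch under assumed names (`Transcript`, `Quote`, `Theme` are hypothetical, not Knovya's schema); the point is that each layer keeps a reference to the one below it, so the walk from a synthesis claim back to the primary source is one hop per layer.

```python
from dataclasses import dataclass, field

@dataclass
class Utterance:               # layer 1: one raw transcript entry
    speaker: str
    timestamp: str
    text: str

@dataclass
class Transcript:              # layer 1: the whole call
    interview_id: str
    utterances: list

@dataclass
class Quote:                   # layer 2: quote bank, tagged by theme
    text: str
    timestamp: str
    source: Transcript         # backlink to the primary source
    tags: list = field(default_factory=list)

@dataclass
class Theme:                   # layer 3: synthesis, justified by quotes
    name: str
    quotes: list = field(default_factory=list)

call = Transcript("call-03", [
    Utterance("Customer", "00:38:10", "I couldn't tell what the plan would cost us."),
])
quote = Quote(call.utterances[0].text, "00:38:10", call, tags=["price-clarity"])
theme = Theme("price-clarity", quotes=[quote])

# "Price sensitivity" in a PRD resolves to the interview it came from:
assert theme.quotes[0].source.interview_id == "call-03"
```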

  5. Q · 05 How do you analyze interview notes?

    Read every transcript at least once before coding. Tag passages by what's actually being said, not by what you wanted to hear. Cluster the tags into themes only after the corpus is wide enough that themes start repeating themselves. Treat outliers as a question, not noise. AI can speed the tagging and clustering — it shouldn't replace the read-through.
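The "cluster only once themes repeat" rule is mechanical enough to sketch. Assuming a hypothetical mapping of interview IDs to the tags coded on each transcript, a tag is promoted to a theme only when it appears in some minimum number of calls (three here):

```python
from collections import defaultdict

# Illustrative coding pass: interview id -> tags coded on that transcript.
tags_by_interview = {
    "call-01": ["price-clarity", "onboarding"],
    "call-02": ["integration-gap"],
    "call-03": ["price-clarity"],
    "call-04": ["price-clarity", "onboarding"],
    "call-05": ["onboarding"],
}

def recurring_themes(tags_by_interview: dict, min_calls: int = 3) -> list:
    """Promote a tag to a theme only once it repeats across enough calls."""
    calls = defaultdict(set)
    for interview, tags in tags_by_interview.items():
        for tag in tags:
            calls[tag].add(interview)
    return sorted(tag for tag, interviews in calls.items() if len(interviews) >= min_calls)

print(recurring_themes(tags_by_interview))  # ['onboarding', 'price-clarity']
```

Single-call tags like "integration-gap" aren't discarded; they stay visible as outliers, the questions worth probing in the next round.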

  6. Q · 06 What's the difference between a customer interview and a user interview?

    Often nothing — many teams use them interchangeably. The technical distinction: a customer interview talks to someone who pays for the product (whose problem you solve in exchange for money); a user interview talks to someone who uses it (whose problem you solve, whether or not they hold the wallet). In B2B these are sometimes different humans; in B2C they usually aren't.

  7. Q · 07 How does AI fit into customer research in 2026?

    The bottleneck has moved from capture to synthesis. Recording and transcription are solved — every video call platform does it. The work AI does well is theme extraction across a corpus of twelve or fifty interviews, persona drafting from clustered quotes, and surfacing the right quote when someone is writing a PRD. AI replaces the spreadsheet step, not the listening step.

  8. Q · 08 Can I use Knovya as a research repository for free?

    Yes. Knovya Free includes 50 notes, 50 AI credits per month, and 50 MCP calls per month — enough to keep the transcripts, themes, and personas from a small research round (roughly 8–12 interviews) and see the synthesis flow before paying anything. Pro and Team add E2E encryption and higher limits for ongoing research programs.