Tags: knowledge-management · ai-workflow · reverse-prompting · practitioner · team

Let the AI Interview You: How to Extract What You Actually Know

The biggest bottleneck in making AI useful isn't technology — it's getting your knowledge out of your head and into a format AI can work with. Here's a method that works, from a five-minute technique to a full team knowledge extraction process.

Fabian Mösli
· 13 min read · 2026-03-17

There’s a problem that almost nobody talks about in the AI space.

Everyone focuses on prompts. How to write better prompts. How to structure better prompts. Prompt libraries, prompt courses, prompt certifications. But the real bottleneck isn’t how you talk to AI — it’s that you don’t know what to tell it.

I don’t mean that in a condescending way. I mean it literally. The most valuable knowledge in any professional’s head is tacit — things you know so well that you’ve forgotten you know them. The instinct that tells a senior salesperson which leads are worth pursuing. The gut feeling that tells a product manager which feature request is actually a symptom of a deeper problem. The mental model that tells an operations lead which process needs fixing before it breaks.

That knowledge is what makes AI genuinely useful when you feed it in. But how do you extract something you can’t articulate?

You flip the conversation. Instead of you asking the AI questions — you let the AI ask you.

The “Ask me 5 questions” technique

This is the simplest version, and you can try it right now. Open Claude or whatever AI tool you use, and type:

“I want to [write a positioning paper / build a strategy / create a presentation] about [your topic]. But before you start: ask me 5 targeted questions that will help you produce something much more precise and useful.”

That’s it. One prompt. What happens next is surprisingly effective.

The AI comes back with five questions. They’re usually good ones — the kind that force you to articulate things you hadn’t thought to mention. Things like:

  • “Who specifically is the audience for this, and what’s their current level of understanding?”
  • “What’s the one thing you want them to remember a week later?”
  • “What’s the most common misconception about this topic that you want to correct?”
  • “What constraints am I working with — length, tone, format?”
  • “What have you already tried that didn’t work?”

Answer the questions. Takes two minutes. And then — here’s the part that surprises people — the output quality jumps dramatically. Not because the AI got smarter, but because you closed the knowledge gap between what you know and what the AI knows about your situation.

This is what I mean when I talk about the brilliant new hire metaphor. The AI has immense general capability but zero knowledge of your specific context. These five questions are the briefing that turns a generic answer into a useful one.
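The template above is easy to parameterize if you run this technique often. A minimal sketch (the function name and wording choices are mine, not a fixed format):

```python
def interview_prompt(task: str, topic: str, n_questions: int = 5) -> str:
    """Build the 'ask me N questions' briefing prompt.

    Mirrors the template above, with the task and topic filled in.
    """
    return (
        f"I want to {task} about {topic}. "
        f"But before you start: ask me {n_questions} targeted questions "
        "that will help you produce something much more precise and useful."
    )

# Example: the same prompt, generated for a specific project.
prompt = interview_prompt("write a positioning paper", "AI knowledge extraction")
```

Paste the result into whatever chat tool you use; the point is simply that the briefing step becomes repeatable instead of retyped.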

The self-critique follow-up

Once the AI produces a first draft based on your answers, add this:

“Now critique your own answer from the perspective of a skeptical [investor / customer / colleague]. Where is the reasoning weak? What would they challenge?”

The AI will identify gaps in its own output — and by extension, gaps in the briefing you gave it. This creates a natural loop: the AI produces, critiques, you refine, it produces again. Each cycle gets better because the context gets richer.

I’ve found this technique alone lifts output quality by roughly half compared to just throwing a task at the AI cold. And it takes maybe five extra minutes.

Scaling up: the AI interview

The “5 questions” technique works for individual tasks. But what if you want to do something bigger — like build a knowledge base that AI can draw from for months?

That’s where the full AI interview comes in. Instead of 5 questions about a specific task, you let the AI interview you deeply about a whole domain. Your role. Your company. Your market. Your decision-making process.

Here’s how I did it when building our company’s AI Operating System:

I sat down with Claude and said:

“I want you to build a comprehensive understanding of my role, my company, and how we operate. Interview me. Ask me questions one at a time. Start broad, then go deeper based on my answers. Don’t stop until you feel you have a thorough picture.”

Then I spent a few hours — spread across several sessions — answering questions. Dozens of them. The AI would ask about our product, I’d answer, it would follow up with something more specific. It would ask about our market, I’d give an overview, it would probe into the competitive dynamics. It asked about decisions I’d made, and then asked why I made them that way instead of another way.

The result was remarkable. The AI pulled out things I hadn’t thought to document. Assumptions I was making without realizing it. Connections between decisions that I’d never explicitly stated. Tacit knowledge that I’d been carrying around for months — or years — that had never been written down.

After each session, I had Claude structure everything into organized Markdown files. Product knowledge here. Market analysis there. Strategic decisions in their own folder. Each file tagged with a confidence level: “verified,” “working assumption,” or “needs validation.”

That collection of files became the foundation of our AI Operating System. And because the knowledge was extracted through conversation rather than written from scratch, it captured nuances that a typical documentation effort would have missed.
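The article doesn’t prescribe a file schema, but one plausible shape for those tagged files is Markdown with a small front-matter header. This is a hypothetical sketch; the helper name, filename convention, and front-matter field are my assumptions:

```python
from pathlib import Path

CONFIDENCE_LEVELS = {"verified", "working assumption", "needs validation"}

def write_knowledge_file(folder: Path, topic: str, body: str,
                         confidence: str) -> Path:
    """Save one extracted-knowledge note as Markdown with a confidence tag."""
    if confidence not in CONFIDENCE_LEVELS:
        raise ValueError(f"unknown confidence level: {confidence!r}")
    folder.mkdir(parents=True, exist_ok=True)
    path = folder / f"{topic.lower().replace(' ', '-')}.md"
    path.write_text(
        f"---\nconfidence: {confidence}\n---\n\n# {topic}\n\n{body}\n",
        encoding="utf-8",
    )
    return path
```

The value of the explicit confidence field is that later prompts can treat "verified" and "needs validation" knowledge differently.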

Taking it to the team

The individual interview is powerful. The team version is transformative.

Here’s what we did at Carewell. After I’d built the initial knowledge base from my own interviews, I generated customized questionnaires for each member of the leadership team.

Not generic survey questions. Targeted, domain-specific questions based on each person’s role and what the AI already knew (and didn’t know) about the company. Our CEO got questions about strategic vision, risk tolerance, and competitive positioning. Our sales lead got questions about client relationships, objection patterns, and market feedback. Our operations lead got questions about process bottlenecks, automation opportunities, and scaling challenges.

The questions were designed to be uncomfortable — in a productive way. “What’s the number one existential risk to the company that nobody talks about?” “Where do you think we’re fooling ourselves?” “What should our product explicitly NOT do?” These aren’t the kind of questions you get in a standard company survey.

Our CEO answered 35 questions. He told me afterward that several of them made him think about things he hadn’t articulated before — assumptions about the business that he’d been operating on without ever writing them down or sharing them with the team.

Everyone on the leadership team did the same exercise. The whole process took about two weeks, with people answering at their own pace.

The goldmine: divergence detection

Here’s where it gets genuinely interesting.

Once everyone had answered their questionnaires, I fed all the responses into the AI Operating System and asked a simple question:

“Compare the leadership team’s answers. Where do people disagree? Where are the fundamental assumptions different?”

The AI came back with six divergences. Six places where leadership team members had fundamentally different views about the company’s direction, strategy, or priorities. Not small differences in phrasing — real, substantive disagreements that had practical implications.

One example: our CEO believed a particular product feature should become the primary interface for customers. I believed it should complement the existing interface, not replace it. Both of us had been operating on our respective assumptions for months. Neither of us had explicitly stated our position to the other.

The AI didn’t just flag the disagreement. It structured it: here’s Position A with the reasoning behind it, here’s Position B with its reasoning, here’s the practical impact of the disagreement on our product roadmap, and here’s a suggested resolution path.

We scheduled a discussion. Within an hour, we’d aligned on an approach — test both with a small user group and let data decide. A disagreement that might have simmered unspoken for months got surfaced and resolved in a structured way.

Most companies have dozens of these hidden divergences. They slow everything down because people work from different assumptions without realizing it. The AI interview process surfaces them systematically.
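A divergence record of the kind described (two positions, practical impact, suggested resolution) could be modeled like this. The field names are illustrative assumptions, not the author’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class Divergence:
    """One surfaced disagreement, structured for a focused discussion."""
    topic: str
    position_a: str   # who holds it, and the reasoning behind it
    position_b: str
    impact: str       # practical consequence of leaving it unresolved
    resolution: str   # suggested path, e.g. a small experiment

    def to_markdown(self) -> str:
        """Render the record for the knowledge base or a meeting agenda."""
        return (
            f"## Divergence: {self.topic}\n\n"
            f"**Position A.** {self.position_a}\n\n"
            f"**Position B.** {self.position_b}\n\n"
            f"**Impact.** {self.impact}\n\n"
            f"**Suggested resolution.** {self.resolution}\n"
        )
```

Keeping each divergence as its own record makes the follow-up concrete: one record, one short discussion, one decision.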

The four phases

If you want to do this for your company, here’s the sequence that worked for us:

Phase 1: Deep research

Before you start interviewing anyone, build an external knowledge base. Use Perplexity or similar research tools to map your market comprehensively. Competitors, regulations, industry trends, potential client segments. All stored as structured Markdown files.

This matters because the AI needs context to ask good questions. If it already understands your market, the interview questions will be much more pointed and useful.

Phase 2: AI interviews with the founder or leader

Start with whoever holds the most cross-functional knowledge — usually a founder, CEO, or senior leader. Let the AI interview them across all domains: product, strategy, market, operations, vision. Multiple sessions, dozens of questions.

Structure the output into your knowledge base after each session.

Phase 3: Team questionnaires

Generate customized questionnaires for each leadership team member. The AI can do this based on what it learned in Phase 2 — it knows what gaps remain, what assumptions need validation, what domains need a second perspective.

Give people time to answer thoughtfully. A week is usually enough. Make it clear this isn’t a test — it’s knowledge extraction.

Phase 4: Divergence detection and alignment

Feed all responses into the system. Ask the AI to find contradictions, disagreements, and fundamentally different assumptions across the team. Structure each divergence as a record with both positions clearly stated, the practical impact, and a suggested resolution path.

Then schedule focused discussions for each divergence. Not a three-hour all-hands meeting — short, structured conversations about specific disagreements. Data over debate.

The 80/20 of building an AI system

If you’ve read my guide on building a Company AI Operating System, you know the architecture: folders, Markdown files, instruction files, behavioral rules. That structure matters, but it’s maybe 20% of the effort.

The other 80% is the knowledge itself. And the process I’ve described here — research, interviews, questionnaires, divergence detection — is how you actually fill the system with knowledge worth having.

Most companies that attempt an AI knowledge initiative fail at this step. They build a beautiful folder structure and then leave it empty, or fill it with surface-level documentation that any AI could have generated from public information. The value comes from the knowledge that’s in people’s heads — the tacit knowledge, the unwritten assumptions, the hidden disagreements.

Getting that knowledge out requires a specific method. Asking people to “just document what they know” doesn’t work — they don’t know what they know, and they don’t know what’s missing. Letting the AI drive the extraction process solves both problems.

One caveat: when you’re interviewing yourself, watch for the AI validating your answers rather than probing them. LLMs have a sycophancy problem — they’ll tell you your perspective is insightful when they should be asking “are you sure about that?” Instruct the AI to challenge your assumptions during the interview, not just record them.

Start small

You don’t need to run a four-phase knowledge extraction initiative to get value from this approach. Start with the “5 questions” technique on your next project. Then try a longer interview session — ask the AI to interview you about your role for 30 minutes. See what it pulls out.

If the results surprise you — and they will — then you’ll understand why this approach scales. The technique is the same at every level: instead of you struggling to articulate what the AI needs to know, let the AI figure out what to ask.

It’s a better briefing process. And better briefings lead to better outputs. Every time.

Published: 2026-03-17

Last updated: 2026-03-17
