Your AI Has No Idea What Your Company Does
AI tools are insanely capable — and completely clueless about your business. That gap is the root of most people's frustration. Here's how to think about it differently.
Fabian Mösli

Here’s something I hear all the time: “I’ve tried ChatGPT. It’s okay, I guess. But it’s definitely not worth all the hype.”
And honestly? I get it. If you open ChatGPT, ask it to help with something related to your work, and get back a generic answer that sounds like it was written by someone who vaguely knows your industry but has never set foot in your company — yeah, that’s underwhelming.
But the problem isn’t the AI. The problem is what’s missing.
The Brilliant New Hire
Here’s how I think about it. Imagine you hire someone fresh out of university. They graduated top of their class. They have an almost unfair breadth of knowledge — they’ve read everything, they’re incredibly fast, and they can think across disciplines in ways that most specialists can’t.
But they’ve never worked at your company. They don’t know your product. They’ve never met your customers. They have no idea what your competitors are doing, how your team is structured, what you tried last year that didn’t work, or why you made the decisions you made.
So when you ask them “How should we approach the Q3 sales push?” — they’ll give you a perfectly reasonable, textbook answer. And it’ll feel… useless. Because it’s generic. It’s based on what they learned at university, not on the reality of your business.
That’s exactly what happens when you use ChatGPT or Claude without giving them context about your company. You’re talking to the smartest new hire imaginable — who knows absolutely nothing about your specific world.
Mental Models Matter More Than Prompts
There’s an entire industry built around “prompt engineering.” Prompt libraries, prompt templates, prompt courses. I’ve never used any of them.
Here’s why: prompt libraries are like memorizing phrases in a foreign language without understanding the grammar. You might get by in a few specific situations, but the moment things go off-script, you’re lost.
What actually matters is having the right mental model — an intuitive understanding of what you’re working with and how to interact with it.
The “brilliant new hire” is my mental model for AI. Once you internalize it, a lot of things click into place:
- Why generic questions get generic answers — you wouldn’t ask a new employee to redesign your sales process on day one without briefing them first
- Why context is everything — the more background you give, the better the output, just like briefing a colleague
- Why AI sometimes makes things up — a new hire who doesn’t know the answer but feels pressure to be helpful might fill in the gaps with plausible-sounding guesses
- Why it gets dramatically better with company knowledge — because now your brilliant new hire has spent six months learning the business
This one mental model will do more for your AI results than a hundred prompt templates.
The Five Things People Actually Do Wrong
I’ve watched a lot of people use AI by now. Not in demos or tutorials — in real work. And the patterns are remarkably consistent.
1. Using it like a search engine
This is the most common one. People type in a question the way they’d type it into Google, get a paragraph back, and think “well, Google was faster and at least gave me links.”
AI isn’t search. Search finds existing information. AI generates new text based on patterns it learned. That’s a fundamentally different thing, and it requires a fundamentally different approach. You don’t search with AI — you work with it.
2. Not giving context
“Write me a marketing email” will get you a marketing email. A bad one. A generic one.
“Write me a marketing email for a B2B SaaS product that helps Swiss hospitals manage temporary nursing staff. Our audience is HR directors who are overwhelmed and skeptical of new software. The tone should be professional but warm, not salesy. Keep it under 200 words.”
Same tool. Completely different result. The difference is context.
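One way to make that briefing repeatable is to treat the context as structured fields rather than something you retype every time. A minimal Python sketch of the idea; the field names and the `build_prompt` helper are my own illustration, not part of any particular tool:

```python
def build_prompt(task: str, company: str, audience: str,
                 tone: str, constraints: list[str]) -> str:
    """Assemble a context-rich prompt from reusable fields."""
    lines = [
        f"Task: {task}",
        f"Company: {company}",
        f"Audience: {audience}",
        f"Tone: {tone}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

# The example from above, expressed as data you can reuse and tweak:
prompt = build_prompt(
    task="Write a marketing email",
    company="B2B SaaS that helps Swiss hospitals manage temporary nursing staff",
    audience="HR directors who are overwhelmed and skeptical of new software",
    tone="Professional but warm, not salesy",
    constraints=["Keep it under 200 words"],
)
```

Once the context lives in one place like this, every future request starts from a full briefing instead of a blank page.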
3. Expecting magic from a single prompt
People type one carefully crafted prompt, look at the output, and either accept it or dismiss the whole thing. But nobody works like that — not with colleagues, not with freelancers, not with anyone.
Think of it as a conversation. Start broad, then refine. “Here’s what I need.” “That’s in the right direction, but make it more specific to hospitals.” “Good, now cut it in half.” “Actually, start with the pain point instead of the product.”
Three or four rounds of this will get you something genuinely good. One prompt almost never will.
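Under the hood, this back-and-forth is just an accumulating list of messages: each refinement gets appended, and the model always sees the full history. A sketch of that loop, with a stubbed `ask` function standing in for a real chat-API call:

```python
def refine(ask, initial_request: str, refinements: list[str]) -> list[dict]:
    """Run an initial request plus follow-up refinements, keeping history.

    `ask` is a stand-in for a real chat-model call: it takes the
    message history and returns the assistant's next reply.
    """
    messages = [{"role": "user", "content": initial_request}]
    messages.append({"role": "assistant", "content": ask(messages)})
    for note in refinements:
        messages.append({"role": "user", "content": note})
        messages.append({"role": "assistant", "content": ask(messages)})
    return messages

# Stubbed model for illustration only; a real call would hit a chat API.
fake_ask = lambda msgs: f"(draft after {len(msgs)} messages)"

history = refine(
    fake_ask,
    "Write a short intro email for our staffing product.",
    ["Make it more specific to hospitals.",
     "Good, now cut it in half.",
     "Start with the pain point instead of the product."],
)
```

The point isn't the code, it's the shape: refinement is cheap, because each round builds on everything said so far.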
4. Having the wrong expectations
Some people expect AI to be perfect. Others expect it to be useless. Both are wrong.
AI is more like a very fast first draft that needs your expertise to refine. It can do 80% of the work in 10% of the time — but you still need to bring the last 20%. Your judgment. Your knowledge of the specific situation. Your taste.
If you expect perfection, you’ll be disappointed. If you expect nothing, you’ll never discover what it can actually do.
5. Trying to solve everything in one go
This is the organizational version of the single-prompt problem. Companies try to automate an entire workflow with AI, fail because the first attempt is messy, and conclude that “AI isn’t ready for our use case.”
Start small. One task. One conversation. One small win. Then build from there.
What Changes When AI Actually Knows Your Business
Let me show you the difference, because it’s dramatic.
Without company knowledge:
You ask: “How should we position against our competitors?”
You get: A generic framework about differentiation strategies. Porter’s Five Forces. The usual MBA stuff. Correct, but useless.
With company knowledge:
You ask the same question. But the AI has access to your documented competitor analysis covering six specific companies. It knows your product roadmap. It knows your pricing strategy. It knows what clients have told your sales team in the past month.
Now the answer isn’t generic — it’s yours. It references specific competitors by name, identifies where your roadmap gives you an advantage, flags a pricing risk one competitor just introduced, and suggests a positioning angle based on actual client feedback.
Same AI. Same question. Completely different value.
The Knowledge Gap (And Why It’s So Hard to Close)
This brings me to the part that most people skip over because it sounds boring: knowledge management.
I know — the moment someone says “knowledge management,” eyes glaze over. So let me put it differently.
Think about the difference between someone who just joined your company yesterday and someone who’s been there for ten years. The ten-year person knows things that aren’t written down anywhere. They know which clients are difficult and why. They know why a certain process exists (and which part of it is outdated but nobody bothered to change). They know who to call when something breaks. They know the unspoken rules, the history, the context behind the decisions.
That’s the knowledge I’m talking about. And right now, almost none of it is accessible to your AI.
This isn’t a technology problem. It’s an organizational problem. Most companies don’t write this stuff down — not because they’re lazy, but because it’s never been worth the effort. Who’s going to read a document about why the pricing structure changed in 2023? Nobody, usually.
But AI changes the equation. Suddenly, writing things down has a direct, tangible payoff: your AI gets smarter. Every piece of context you give it makes every future interaction better. Not just for you — for everyone on your team who uses the system.
Three Types of Knowledge Your AI Needs
Not all knowledge is the same, and each type requires a different approach to capture. Here’s how I think about it:
External rules and regulations
These are facts that exist outside your company — laws, industry standards, compliance requirements. You can’t rely on what the AI “learned” during training, because this is exactly where hallucinations happen. An AI that confidently cites a regulation that doesn’t exist is worse than an AI that says “I don’t know.”
What I did at Carewell: I used Perplexity to do deep research on all the relevant Swiss labor regulations for temporary staffing, prioritizing official government sources. Then I imported that research into our knowledge system. Now when anyone asks a regulatory question, the AI answers from verified sources — and flags it as a regulatory topic that should be double-checked before acting on.
Product and operational knowledge
This is what your company builds, sells, and does. Product documentation, processes, workflows, pricing — the stuff that already exists somewhere but is scattered across wikis, slide decks, and people’s email inboxes.
Some of this you can import directly. But some requires creative approaches. For our product, I built a simple browser extension that takes annotated screenshots of every screen in our application. It sends each screenshot to an AI model that writes a detailed description — what the screen shows, how the interface works, where you can navigate from there. I imported all of this into our system. The AI can now understand our entire application and answer specific product questions about any feature or screen.
The knowledge in people’s heads
This is the hardest one and the most valuable. It’s everything your team knows but has never written down. What they’ve learned from customer conversations, from fixing problems, from trying things that didn’t work.
At Carewell, I created detailed questionnaires for each team member — tailored to their role. The CEO got questions about strategy, market positioning, and company vision. I got questions about product management and AI architecture. Sales got questions about customer patterns and objections.
It sounds simple, almost trivially so. But the output was incredible. Suddenly the AI had access to perspectives and knowledge that had only ever lived in individual people’s heads.
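These three categories are worth making explicit in the knowledge base itself, so the system can treat them differently, for example flagging regulatory answers as "double-check before acting", as described above. A minimal sketch; the type names and the flagging rule are my own framing, not a specific product's schema:

```python
from dataclasses import dataclass
from enum import Enum

class KnowledgeType(Enum):
    REGULATORY = "regulatory"    # external rules: verify before acting
    OPERATIONAL = "operational"  # product, processes, pricing
    TACIT = "tacit"              # captured from people's heads

@dataclass
class KnowledgeEntry:
    title: str
    body: str
    kind: KnowledgeType
    source: str  # e.g. "official government source", "screenshot import", "CEO questionnaire"

    def needs_verification(self) -> bool:
        # Regulatory answers should be double-checked before acting on them.
        return self.kind is KnowledgeType.REGULATORY
```

Tagging entries this way costs almost nothing at import time and pays off every time the AI can say not just *what* it knows, but *what kind* of knowledge it is.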
You Don’t Need Prompt Libraries. You Need Systems.
Here’s something counterintuitive: the better your AI system is set up, the less your prompts matter.
When I talk to our AI system at Carewell, I often just dictate a page or two of rambling thoughts. It’s not structured. It’s not optimized. I basically think out loud. And it works beautifully — because the system already has so much context, so many instructions about how to behave, so many constraints built in, that even a messy input produces great output.
Compare this to someone using a blank ChatGPT session with a carefully crafted prompt from a prompt library. They’ll get a decent result for that one specific use case. But they have to start over every time.
That’s the difference between memorizing phrases and speaking the language. Between having a recipe and knowing how to cook.
If you want to see what that system actually looks like in practice — the architecture, the knowledge repository, the daily habits — I wrote a step-by-step guide: Building a Company AI Operating System.
Start With the Mental Model
You don’t need to build a whole knowledge system tomorrow. But you can start thinking about AI differently right now.
Next time you open ChatGPT or Claude, try this:
- Before you ask your question, spend 30 seconds briefing the AI like you’d brief a smart new colleague. Who are you? What does your company do? What’s the context for this task?
- Don’t accept the first answer. Push back. Ask follow-ups. Say “that’s too generic” or “make it more specific to my situation.”
- At the end, notice how much better the conversation was than just typing a question into a void.
That’s the mental model at work. And once you’ve experienced the difference, you’ll start wanting to make that context permanent — which is exactly what a Company AI Operating System does.
The AI isn’t disappointing. It’s waiting for you to teach it what it needs to know.
Published: 2026-02-26
Last updated: 2026-02-26