Model Citizens

March 2, 2026

You've seen the demos. An AI agent handles a complex multi-step workflow. It writes the email, updates the CRM, fires off the Slack message. Impressive.

Then you try to build that for your company and discover it doesn't work. Not because the technology is broken. Because the agent doesn't know anything about you.

It doesn't know you call them "tech packs," not "product specs." It doesn't know that "approved" in your process means approved by Sarah, not by the system. It doesn't know that orders from the EU go through a different compliance check. It doesn't know any of the hundred things that everyone on your team knows without thinking about it.

General AI is brilliant at general tasks. Your work is not a general task.

The context problem

Here's what you get out of the box: a model that's been trained on the internet. It knows a lot. It can write well, reason through problems, summarize documents. It has seen most of the patterns that exist in most industries.

What it hasn't seen is your company. Your processes, your terminology, your standards, your exceptions, your history. The things that make your business work the way it works instead of some generic version of your industry.

You can't prompt your way around this gap. A lot of people try. They write long system prompts explaining the company. They paste in documentation. They add context at the top of every message.

This works up to a point. It breaks down at scale, at edge cases, and at the specific institutional knowledge that nobody has ever bothered to write down because everyone already knows it.

The agent is smart. It just doesn't know enough. And a smart person who doesn't know enough is dangerous in exactly the way a confident first-day employee is dangerous — they'll make the right move for someone else's company, not yours.

What a model citizen actually needs

There's a phrase for employees who've internalized how an organization works: they're "good cultural fits." They know what's expected without being told. They handle situations the way the company would want them handled, including the situations no policy document covers.

That's the bar for an agent. Not merely technically capable. A model citizen.

Getting there requires three things that general-purpose AI tools don't provide:

Persistent company memory. Not just a system prompt you refresh manually. A living knowledge base that captures how your company works — and updates when things change. When the process changes in March, the agent needs to know. When a new client has specific requirements, those requirements need to be in there. This isn't a document. It's a continuously maintained model of how you operate.

Learning from corrections. Every time someone says "that's not how we do it" — that correction is gold. The agent just learned something specific about your company that it couldn't have learned from training data. A system that captures those corrections and applies them going forward gets better in a way that a general AI tool cannot. It's learning your company, not just tasks.

Domain-specific orchestration. Getting an agent to do real work in your specific context isn't just about plugging it into your tools. It's about understanding the sequence of your process, the dependencies between steps, the decisions that require human judgment versus the ones that don't, the exceptions that route differently. This is not something you can configure in an afternoon. It's a design problem. The orchestration has to match your actual workflow, not a generic template.
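The first two requirements can be sketched as a toy loop. This is purely illustrative: the CompanyMemory class and its flat-list storage are hypothetical stand-ins, and a real system would persist corrections durably and retrieve them selectively rather than injecting everything into every run.

```python
from dataclasses import dataclass, field


@dataclass
class CompanyMemory:
    """Toy persistent company memory: facts about how this company operates."""
    facts: list[str] = field(default_factory=list)

    def record_correction(self, correction: str) -> None:
        # Every "that's not how we do it" becomes a durable fact,
        # captured once and carried forward.
        if correction not in self.facts:
            self.facts.append(correction)

    def as_context(self) -> str:
        # Injected into the agent's context on every future run,
        # so the same mistake never needs correcting twice.
        return "\n".join(f"- {fact}" for fact in self.facts)


memory = CompanyMemory()
memory.record_correction('We call them "tech packs", not "product specs".')
memory.record_correction('"Approved" means approved by Sarah, not the system.')

# Prints the accumulated facts as a bulleted context block.
print(memory.as_context())
```

The point of the sketch is the shape, not the storage: corrections flow in from people, accumulate somewhere persistent, and flow back out into the agent's context automatically.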

Why most AI deployments fail

Most teams skip this work.

They take a general-purpose model, give it tool access, write a brief system prompt, and call it an AI agent. It does something impressive in the demo. In production, it falls apart on anything that requires real company knowledge.

The failure isn't the AI. It's treating AI deployment as a configuration exercise instead of a context-building exercise.

The context-building is the product. The AI is just the capability that runs on top of it.

The payoff

When you get this right, the shift is immediate and obvious.

The agent stops needing you to explain things. It already knows. You stop correcting the same mistakes. It learned. The output stops sounding like polished, generic AI and starts sounding like someone who's been at your company for two years.

More importantly: the agent starts handling the edge cases correctly. Not because you anticipated every edge case in a prompt. Because it's absorbed enough about how your company works that it can reason its way to the right answer in situations it hasn't explicitly seen before.

That's what a model citizen does. It carries the institutional knowledge forward, applies it consistently, and handles the unexpected the way your best employees would.

Building that takes real work. It's not magic, and it's not a prompt. But it's the only version of AI deployment that actually makes a dent in how much you have to do.