Do, Know, Decide

Why your AI can't tell the difference between performing and knowing, and why that's your job.

A program sponsor sent me feedback this week about a first-time manager piloting our management AI assistant. What was the difference between asking this agent, a generative tool, or just Slacking the boss? Mode confusion was blocking adoption.

All three will respond, each with a different intention and a different outcome for the user. The distinction lies in the three levels of the ask: Do, Know, and Decide.

DO is generative. Drafts, summaries, or code. The output is the artifact. Prompt skill matters because intent shapes output. ChatGPT, Claude, and Gemini are Do tools. They will generate a plausible answer to any management question. The answer may be right, wrong, or middling, and the user has no way to tell from the response.

KNOW is constrained. A real knowledge canon includes a framework, a documented body of work, and verified sources. An AI agent connected to that canon performs retrieval against bounded material rather than generating from the open web. The output isn’t generated. It’s drawn from a corpus that human authority stands behind.

DECIDE is accountable. Someone takes action. Someone owns the outcome. Either you hold the authority yourself, or you connect to a human who does: an experienced colleague, a manager, or an authoritative source. Despite AI's best impression, DECIDE authority is always human. An AI has no standing here.

Three modes. Three different jobs. AI presents all three the same way, as confident text in a chat window. At scale, that sameness produces predictable failures: people trusting performance as knowledge, accepting generation as judgment.

AI literacy separates the modes. The AI-literate user asks: 

  • What is performative? 

  • What is normative? 

  • What is trying to supplant human wisdom, and what is assisting it?

Assisting the first-time manager means giving them an understanding of AI’s limits: handing tasks to generative tools, building knowledge from the AI assistant, and going to human authority when the decision is real.

A properly configured AI assistant offers a safe place to practice and build the knowledge that supports future decisions. A generative AI performs the wisdom it cannot hold and atrophies the knowledge of the user who accepts it.

The AI Literacy Program at Assisting Intelligence builds the skill to separate what AI performs from what it knows, and from what only you can decide. The program includes AI assistants designed around bounded knowledge, not open generation. Start at learn.assistingintelligence.com.
