OECD: Nepo Babies on Segways
What 247 Pages of Research Reveal About AI and Learning
I’ve started thinking of Generative AI as the nepo-baby intern your VC insisted you hire.
Glittering résumé. Impressive vocabulary. Tireless work ethic. Obvious potential—despite the parents. But also: wildly uneven judgment, no sense of how the real world works, and a habit of firing off work so confidently that you’d be reckless not to check every word.
The OECD Digital Education Outlook 2026 is a 247-page examination of AI in education systems worldwide. The key finding: Receiving outputs without effort actively harms learning. The report also shows a better path. One where AI challenges us to achieve results we couldn’t reach alone.
Here are five insights that connect directly to corporate AI learning systems:
Insight #1: The Mirage of False Mastery
Here’s the data behind what we intuitively know: AI can become a crutch. The ‘Vending Machine’ approach (getting outputs without effort) provides a performance boost that masks a collapse in actual learning. Students and workers who rely on AI as a crutch perform worse when it’s removed than those who never used it (p. 78).
Insight #2: From Vending Machines to ‘Synergistic Teaming’
The report proposes a dialectic in which humans and AI challenge each other to produce results neither could achieve alone. This works when the AI acts as a negotiator among learning objectives, the expert, and the learner. The risk? Without a knowledge canon to ground it, we create an Artificial Sophist (all persuasion, no truth—replacing the crutch with a Segway). But when expert knowledge tethers the AI, the report found “the emergent competence is likely to exceed the maximum of individual AI or human competence” (p. 138).
Insight #3: Respecting Learner Agency
Using generative AI to auto-author a course is like beating the learner over the head with the crutch. It is not just unhelpful; it is actively hostile to agency. Learners grow frustrated when AI ignores their input or responds generically. The challenge, as the report notes, “is maintaining the delicate balance between support and independence.” Building these systems requires iterative refinement and rigorous evaluation—not a one-and-done generative script (pp. 77–80).
Insight #4: Rethinking Product vs Process
Educators are shifting focus from the final product to the process of creation. In business, this means “breadcrumbing” the interaction history between human and AI becomes a new form of IP, insight, and quality assurance. Teach learners to document their AI interactions. In one medical study cited in the report, assessing how the diagnosis was reached mattered as much as what the diagnosis was (p. 53).
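What “breadcrumbing” might look like in practice: a minimal sketch of logging each step of a human–AI exchange so the process, not just the final output, can be reviewed. The field names, roles, and example content here are illustrative assumptions, not anything prescribed by the report.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class Breadcrumb:
    """One step in a human-AI interaction, kept for audit and QA."""
    role: str      # e.g. "learner" or "assistant" (illustrative labels)
    content: str
    timestamp: str

def log_breadcrumb(trail: list, role: str, content: str) -> None:
    """Append a timestamped record of who contributed what."""
    trail.append(Breadcrumb(role, content,
                            datetime.now(timezone.utc).isoformat()))

def export_trail(trail: list) -> str:
    """Serialize the full process so reviewers can assess the 'how'."""
    return json.dumps([asdict(b) for b in trail], indent=2)

# Hypothetical exchange, loosely echoing the medical-diagnosis example:
trail = []
log_breadcrumb(trail, "learner", "Draft a differential diagnosis for these symptoms.")
log_breadcrumb(trail, "assistant", "Consider A, B, C; B fits best because ...")
log_breadcrumb(trail, "learner", "Why rule out C?")
print(export_trail(trail))
```

A trail like this can be attached to the final deliverable, turning the interaction history itself into reviewable IP.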
Insight #5: Hybrid Human-AI Skills—Or De-Skilling in Disguise?
The report highlights the critical question: When should we require effortful thinking from the learner? If the answer involves a crutch, organizations aren’t just missing opportunities. They’re actively de-skilling their workforce. But the report shows a better path. When organizations apply critical thinking, decide where and when to use AI, and embed their insight and IP into custom agents, they create something different. These retrieval-augmented generation (RAG) systems don’t just answer questions. They carry your organization’s intelligence forward.
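The core RAG idea can be sketched in a few lines: retrieve the most relevant in-house documents, then ground the model’s answer in them. The toy corpus, the naive keyword-overlap scoring, and the prompt format below are all illustrative assumptions—a real system would use embeddings and an actual LLM call—but the structure is the point: organizational knowledge shapes every answer.

```python
# A toy in-house knowledge base (contents are invented for illustration).
CORPUS = {
    "onboarding": "New analysts shadow a senior reviewer for two weeks.",
    "style":      "All client memos open with a one-line recommendation.",
    "escalation": "Escalate pricing exceptions to the deal desk.",
}

def retrieve(question: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the question.
    (A production system would use embedding similarity instead.)"""
    q_words = set(question.lower().split())
    scored = sorted(CORPUS.items(),
                    key=lambda kv: len(q_words & set(kv[1].lower().split())),
                    reverse=True)
    return [text for _, text in scored[:k]]

def build_prompt(question: str) -> str:
    """Ground the generation step in retrieved organizational knowledge;
    in a full system this prompt would be sent to an LLM."""
    context = "\n".join(f"- {doc}" for doc in retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How should a client memo open?"))
```

Swap the dictionary for a vector store and the keyword match for embedding search, and you have the skeleton of an agent that answers with your organization’s expertise rather than generic internet priors.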
AI can assist in learning. The report empirically demonstrates this. AI can help us achieve results we couldn’t reach alone. But everything in this report, across 247 pages and hundreds of studies, proves the same point: The easy path (AI as vending machine) leads to de-skilled workers and false competence. The harder path, building AI that challenges learners, respects their agency, and carries your organization’s expertise, requires real work. But it’s the only path that leads anywhere worth going.
It’s interesting to see how an AI interprets the title of this post. Image from Google Gemini.

