The AI Sandbox

For two years, I watched my daughter use Khanmigo. It possessed every advantage available to an AI tutor. Sal Khan and the team had early access to OpenAI’s models, a deep understanding of digital learning, a robust knowledge canon, and a platform reaching millions of students. If any AI tutor was going to work, it was this one. What I saw was something else. Her usage was sporadic. She mostly engaged with the Greek Gods to discuss the nuances of Rick Riordan’s books. And despite having access on her home computer, she waits for the time-gated ChatGPT access her school provides.

The AI is not the problem; the role is. Assigning AI a human persona in learning is the failure.

Where the Tutor Model Fails

The tutor model fails because the AI directs the interaction when the learner should. Khan Academy wanted students to ‘persist’, and Socratic dialogue was the chosen mechanism. But a student with a paper or homework due tomorrow isn’t there to persist; they have a deliverable and a deadline. A tutor who answers questions with questions when the learner needs an answer will get “IDK” until either the student or the AI gives up. Socrates sought introspection, moving the learner from ignorance toward self-awareness. The AI tutor mimics the form and skips the purpose. No matter the prompt, Khanmigo tacks on another question, algorithmically extending the interaction. Socrates was and remains a pain in the [please remain seated], but his inquiries had the introspective grounding to earn it. An AI tutor has no such standing. It performs a Socratic dialogue without the Socratic part.

What the Research Shows

In a recent Substack post drawn from her research paper (both worth the read), Caitlin Morris of the MIT Media Lab described this pattern from a different angle. She ran a study in which ~150 students received the same AI-generated feedback, labeled either as AI or as a human teaching assistant. Students who believed a human had reviewed their work spent 28% more time on task and wrote more complex code, even though they rated the feedback itself as equally useful. Morris points out that students who didn’t believe the ‘human’ feedback was authentic performed even worse.

The Modern Learner

In their book, The Disengaged Teen, Jenny Anderson and Rebecca Winthrop describe four mindsets students bring to learning. The promise of Khanmigo was that it could reach the passive and resisting students who need the most help. For a student who sees homework as a deliverable with a deadline, the opposite happens: an AI tutor questioning a passive student hardens the disengagement, and a resisting student learns to fake the dialogue. Khanmigo gets abandoned for open generative AIs that hand over answers without guardrails.

We are codifying digital disengagement.

Despite its ability to sound human, AI is not a person and cannot occupy a human role. Maybe the answer is to stop thinking of AI as a role at all.

Think of Minecraft as a Sandbox

Minecraft isn’t a role, or even really a game; it is a place. There are rules, sure, and paths, but mostly it is an open sandbox to play around in, and whatever direction you choose, it dynamically builds a place for it.

An AI sandbox is a safe place to break things, follow curiosity, and build without the weight of a grade or a deliverable. The learner directs the interaction. An AI tutor, by design, cannot do this. 

Sandboxes are the practice space without defined outcomes.

What makes Minecraft a great sandbox? It has structure without a predetermined outcome. You can build anything, but you have to learn the materials to do it. That’s not just a place to play. It’s a place that rewards learning to direct yourself.

What would make Khan Academy a great sandbox? It could offer a safe, learner-directed space for discovery, like when my daughter chats with Athena. 

What This Could Look Like

Imagine an English teacher with a passive or resisting student who lights up when discussing the movie Project Hail Mary and its science. Assigning the 500-page book might arrive as a cudgel to the teen’s interest. The novel and the movie cross multiple disciplines. That’s the perfect access point for an AI sandbox. 

The direction remains human, initiated by the teacher, driven by the student. “Chase the biology of Astrophage. Can xenon be a metal? Find out how science shapes the fiction. Write about what you find; show me your prompts and what you discover.” Like Minecraft, the AI can follow the inquiry wherever it leads. In the sandbox, the learner directs it. Because it isn’t an open generative AI, it can decline to act on the prompt “Write me an essay on Astrophage.” Sandbox guardrails could stop the most egregious bypassing of thought.

The sandbox produces a visible record of inquiry. When students use generic AI on a personal device, exploration may happen, but the teacher only sees the output, not the inquiry. The value of an AI sandbox, especially in a Khan Academy setting, is as much in what the questions reveal about the learner as in the answers it provides.

The AI Pivot

Students are already using AI; mine is. General-purpose technologies do not retreat, and AI is no exception. The question is not what role AI should play, but what place it can earn in education.

This generation will be AI native. They will direct it or be directed by it.

Passivity and direction are different mindsets, and the world they're walking into rewards one and not the other. Trust teachers, parents, fellow students, even the family dog, and not AI, to do the work only humans can do. Show empathy and patience, and persist past the passivity. Use AI as a place where students bring their curiosity and drive the inquiry to direct their learning. 

Khan Academy already has all the elements needed to reframe Khanmigo as a sandbox: the platform, the knowledge canon, the model access, and the trust of millions of learners. What needs to change is not the technology. It is the frame.

AI has no role in learning, but it might have a place.


Read on:

The Disengaged Teen | Helping Kids Learn Better, Feel Better, and Live Better

Jenny Anderson and Rebecca Winthrop

Learning in the Open: What AI Is (and Isn’t) Changing. | Khan Academy Blog

The Motivation Ceiling | Better AI tutoring can’t solve a social motivation problem; new research hints at why. Substack | Caitlin Morris

When Peers Outperform AI (and When They Don’t): Interaction Quality Over Modality | Arxiv | Caitlin Morris, Pattie Maes
