As generative artificial intelligence continues to evolve at a rapid pace, educators, parents, and policymakers face a defining question: How do we harness AI’s immense potential in the classroom while ensuring students still develop the critical thinking skills they need for life? Harvard faculty members Michael Brenner, Tina Grotzer, and Ying Xu recently explored this question on the podcast Harvard Thinking, offering insights that are as practical as they are thought-provoking.
The Core Tension: Enhancement vs. Replacement
The debate is no longer whether AI belongs in education — it is how to integrate it responsibly.
Michael Brenner, professor of applied mathematics at the Harvard John A. Paulson School of Engineering and Applied Sciences, believes that avoiding AI in academic settings is not an option. “I feel it would be irresponsible for me not to embrace it entirely, both in the classroom and in research,” he said. In his view, educators and students who fail to engage with these tools risk falling behind, both in their careers and in their capacity to advance science.
However, cognitive scientist Tina Grotzer of Harvard’s Graduate School of Education cautions that reports of AI negatively affecting students’ social, emotional, and cognitive development must be taken seriously. The challenge for educators is thoughtful integration: using AI to enhance learning, not to circumvent it.
What Is at Stake: Critical Thinking and Foundational Skills
Learning involves far more than memorizing facts — it builds the capacity to think.
Grotzer emphasizes that education has long operated as a knowledge economy, focused on accumulating information students can apply in the real world. But learning also means developing metacognition — understanding how your own mind works, what critical thinking looks like, and how creative reasoning unfolds. Many students, she notes, don’t even realize these are skills that must be cultivated.
Ying Xu, assistant professor at the Harvard Graduate School of Education, frames learning along two dimensions: what students learn and their ability to learn. “Critical thinking is one of these very important foundational capacities,” she explains — and it is precisely this capacity that educators fear AI may undermine when students outsource their thinking to a chatbot.
Redesigning the Classroom for the AI Era
Harvard professors are already rethinking assignments to push students further.
Professor Brenner made a bold classroom decision after discovering that the AI tool Gemini could solve his entire graduate-level mathematics problem set. Rather than continuing with traditional assignments that chatbots could simply complete, he redesigned the course entirely. Each week, students were tasked with inventing problems that AI could not solve — and were awarded extra credit if their problems stumped the best available chatbot. By semester’s end, 60 students had collectively produced 600 AI-resistant problems, which were later published as a co-authored academic paper. Final exams were replaced with oral assessments where students had to solve and explain problems they themselves had created.
“I think that because we have AI, students should do more, they should solve harder problems. They should learn more,” Brenner said.
Grotzer has taken a similarly reflective approach, engaging her students in honest conversations about why they are in school and what they lose when AI does their thinking for them. She distinguishes productive AI use, such as using it to quiz oneself, explore alternative perspectives, or stress-test instructional designs, from passive reliance that bypasses genuine cognitive engagement.
Age-Appropriate AI: What Parents and Educators Need to Know
Introducing AI tools to children requires careful consideration of developmental readiness.
Xu notes that one of the most common questions she receives from parents is: At what age is it safe to introduce AI tools? Her answer depends heavily on the type of tool. Specialized educational tools designed to teach phonics, math, or science carry different implications than general-purpose AI assistants. The latter, she argues, require a level of self-regulation that many young students have not yet developed.
A survey of 7,000 high school students conducted by Xu’s team revealed a troubling pattern: nearly half acknowledged they were relying on AI too much for their learning, and over 40 percent said they had tried to limit their use but failed. This underscores the importance of building self-regulation as a foundational skill alongside — not after — AI literacy.
The Irreplaceable Human Element in Learning
Relationships, motivation, and social-emotional context cannot be replicated by machines.
Both Grotzer and Xu stress that what makes education transformative is not the exchange of information alone. Human tutors manage motivation, respond to emotional cues, calibrate how much support to offer, and build relationships over time. Xu’s own research found that while AI tutors and human tutors sometimes produced similar learning outcomes in terms of information retained, students reported significantly higher engagement, enjoyment, and confidence when working with human instructors.
In one revealing experiment, Xu gave two groups of students identical essay feedback — but told one group it came from AI and the other it came from their professor. Students who believed the feedback was human-generated rated it as significantly more useful. The implication is clear: students learn not just from content, but from knowing that someone cares about their growth.
Rethinking the Purpose of Education
AI forces educators to ask not just how we teach, but why.
The same survey of 7,000 high school students also asked whether subjects like math and English still felt important in the age of AI, and the responses revealed a significant drop in motivation. Xu calls this “a wake-up call” for educators to restructure learning so it connects with what students actually want to achieve in their lives.
Grotzer advocates for metacognition as a new pillar of educational purpose: teaching students to understand their own minds, recognize what human cognition does that AI cannot, and make intentional choices about when to delegate tasks to technology and when to engage their own thinking. “Once you start to know what your mind can do that’s so much better than AI,” she says, “it kind of makes sense that some tasks are well-relegated to AI and other tasks are not.”
Xu adds an important perspective for parents: AI is just one part of a child’s developmental ecosystem. Time with family, friends, hobbies, and nature remain equally vital. “What does matter is how it fits into the larger ecosystem of a child’s life,” she concludes.
