Stanford education experts put AI into perspective
One of the last assignments in Stanford’s renowned introductory coding course, Computer Science 106A, is “Infinite Story,” a task that asks students to code an adventure game from scratch and then use ChatGPT to keep it going, like a Choose Your Own Adventure story with no ending.
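(As a purely illustrative aside, not the course’s actual assignment code: a project in this vein might call a chat model to extend the story at each branch the player chooses. The Python sketch below assumes the openai package and an OPENAI_API_KEY environment variable; the model name and prompts are placeholders.)

    # Hypothetical "Infinite Story" loop (illustrative only, not CS 106A code).
    # Assumes: pip install openai, and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    story = ["You wake in a moonlit forest with two paths ahead."]

    for _ in range(3):  # a real game would loop until the player quits
        choice = input(f"{story[-1]}\nWhat do you do? ")
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Continue this adventure story in 2-3 sentences, "
                            "ending at a new decision point."},
                {"role": "user",
                 "content": "\n".join(story) + f"\nPlayer: {choice}"},
            ],
        )
        story.append(response.choices[0].message.content)

    print("\n".join(story))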
Given widespread fears about AI replacing coding work, the course’s embrace of the technology might seem surprising. But for 106A instructor Mehran Sahami, the decision is both pragmatic and optimistic.
“The tools do add productivity, industries are expecting students to use them, and students are going to use them, even if they are ‘banned,’” said Sahami, the Tencent Chair of the Computer Science Department and the James and Ellenor Chesebrough Professor in the School of Engineering. “So, rather than attempting to forbid use of AI until graduation, why not try to figure out what form of instruction creates a solid understanding of the material while taking advantage of these tools?”
Rather than focusing on banning AI in schoolwork, many experts see genuine potential for AI in education, though they are wary of overpromising what it can do and of underestimating its negative effects on learning.
“What we’re going to be grappling with in the near future is the question: what do we want people to be learning to do?” said Karin Forssell, director of Stanford’s AI Tinkery and GSE Makery, and of the Learning Design and Technology master’s program at the Graduate School of Education.
The AI boom has everyone playing catch-up. That means more trial and error, and more rigorous research, to understand AI’s possible futures in schools.
AI requires change
Without a ban on AI, educators must distinguish what students should learn without AI assistance from the professional AI skills they will need, in a world where most professionals are themselves unsure how they’re supposed to apply AI on the job.
In CS 106A, Sahami and his co-instructors want students to know the fundamentals before applying AI to their projects. At the same time, Sahami noted that many companies require that computer science new hires be comfortable using AI in their daily work. Lessons about AI in CS 106A are not limited to using the tools; they also include discussions of ethics, such as bias, fairness, and how the choice of data used to train AI models affects the results they produce.
“By the time students get to their senior project, there’s an expectation that they’re using some AI tools to help produce that larger piece of work,” said Sahami. “That said, they have to have some fundamental knowledge to be able to assure that the code is doing the right thing.”
Many developers and researchers in the education space are focused on how AI can speed up – or even replace – existing teaching or learning tasks. The fear is that, in the process, AI could displace valuable learning and teaching.
“I’ve been calling it a ‘gold rush’ moment because lots of people are flocking to AI to find a gold vein that they hope will change everything for the better,” said Forssell.
That rush has led to widespread promotion of immature products. But that doesn’t mean that all AI tools being offered now are untested or unworthy. For example, Sahami likes the idea of Khan Academy’s Great Gatsby chatbot, which allows students to chat with the book’s protagonist, Jay Gatsby. In a completely different application, Stanford Law School’s Legal Innovation through Frontier Technology Lab is testing how AI-driven simulations of real-world scenarios can provide law students with another option for mastering negotiation and other legal practice skills.
“As we figure out how to use these tools better, I’m optimistic that there are real opportunities to incorporate them into educational experiences that allow students to engage more deeply with material or receive more personalized instruction or tutoring,” said Sahami. “Now, the pessimist in me says, who gets access to those opportunities?”
Victor Lee, associate professor in the Graduate School of Education, is also faculty lead at the Stanford Accelerator for Learning’s initiative on AI and Education. The initiative brings together experts from all seven Stanford schools with practitioners to research, discuss, and explore the realities of AI in education, and to develop widely accessible, robust tools that help educators navigate and teach about the AI revolution.
“We are really pushing toward creative learning and learning through creation,” said Lee. “We’re trying to put forward tools and research where AI becomes part of new expressive capabilities for students, rather than information dumpers into people’s heads.”
These tools include resources like the Classroom-Ready Resources About AI For Teaching (CRAFT) program, a co-design initiative from the Stanford Graduate School of Education and the Stanford Institute for Human-Centered AI. CRAFT is a collection of free AI literacy resources for high school teachers, which Lee’s team created alongside teachers from around the country who specialize in a variety of subjects. Within CRAFT, educators can find lessons on how AI can be used – whether to make music, fight climate change, or explore archaeological sites – and modules that delve into bias, privacy and safety, copyright, environmental cost, and many other concerns about AI.
Even when educators find the right balance of AI and AI-free work, there is the question of enforcement. Some instructors are increasing their use of handwritten or oral exams. Forssell, for her part, hopes AI-enabled education is an opportunity for educators to raise and stretch expectations, both of student work and of their own teaching.
“I think there’s a possibility that we will be revisiting what good writing is with students, and we will not accept a mediocre essay, because that’s what this tool can give you,” she said. “There could also be, for example, an increase in assignments that students take out into the world to be evaluated for their contributions to their communities.”
What about the evidence?
Two popular questions about AI in education are whether it leads to more cheating and whether it reduces the burden of lesson planning on teachers. Many people assume “yes” to both, said Lee. He isn’t so convinced, and he uses these subjects as arguments for a more rigorous understanding of these technologies’ real-world effects.
“It’s important for everybody to pause and think: What do we know and where do we have evidence for this?” said Lee. “Because a lot of the narratives around AI seem to fit our convictions or our fears. As an academic, I would want to see what the evidence is, and I’m willing to put in some time before I reach conclusions.”
“There are still a lot of people saying, ‘Let’s just try to ban the tool.’ And part of that is momentum. Part of it is that the tool didn’t exist before,” said Sahami. “There was a particular way that teaching was done and the easiest thing to do is to continue that teaching in the same way, which doesn’t fit with using these tools. I don’t think that’s a good long-term answer.”
While instincts, momentum, and assumptions play strong roles in discussions about AI in education, Lee and his lab have actually studied academic integrity and teacher prep tools, among other AI uses. So far, their research suggests AI is changing how students cheat – but not how many students cheat compared with before AI.
As for AI being the perfect way to make teaching easier, faster, and more personalized to individual students?
“I was very hopeful that AI would free up teachers from burdensome tasks, so they could focus more on teaching students,” said Lee. “However, history shows that innovations intended for convenience can sometimes lead, unexpectedly, to more things to manage than before, which is why we’re investigating that further.” He pointed to social media and email as examples of innovations that may not have produced the intended benefits.
For example, Lee wonders whether having AI draft notes to parents or lesson plans actually saves time, or whether it could lead to more meetings or more time spent revising. In hopes of capturing the reality of these tools, he has a project underway with teachers and districts to develop and study how teachers integrate AI into their workflows to increase efficiency.
To help the Stanford community develop more informed opinions of AI, the AI Tinkery hosts “Tinker Time,” where staff walk visitors through a variety of hands-on activities. For example, an instructor can try out a common AI lesson planning tool and ask the kinds of questions that researchers also ask, like: What is the tool good at? What is it not? How would you have designed this lesson without this? Did you bring your expertise to bear in a way that improved the output?
“When people play with those tools and really look at the output, they start to learn what goes into AI and where its limitations are,” Forssell said. Those limitations can include built-in assumptions about the teacher and their students, or outputs that might improve on the work of a first-time educator but not an expert’s.
How to adapt education in a world with AI
Although AI in education is a rapidly evolving conversation, researchers do have concrete suggestions for how we can adapt alongside it. Following their lead means staying open to AI’s opportunities while remaining mindful of potential pitfalls at societal, global, and personal levels.
Lee sees a need to improve research on the nuances of AI literacy – specifically, how to encourage reasonable relationships with AI technologies across a wide population. Age and career can affect what we assume about AI. Moreover, many people operate on snap judgments: that AI is either well-informed and trustworthy or innately deficient and potentially hostile. This work is multifaceted, encompassing topics such as how AI works, bias and misinformation, and data privacy and security, plus questions that change daily due to the fast pace of advances in these technologies.
Faculty mentioned in this article: Karin Forssell, Victor R. Lee
