Victor Lee, associate professor at Stanford GSE

AI in the classroom: Equity, creativity, and teaching

On this episode of School’s In, Victor Lee talks about the rise of generative AI in education and its implications for how we teach and learn.
September 12, 2024
By Olivia Peterkin

Since generative artificial intelligence (AI) gained popularity two years ago and quickly found its way into school settings, there has been concern over how to use the tool ethically and effectively in education. Part of its future adoption will involve figuring out what responsible AI use looks like, and how educators can best prepare students to navigate a world where AI plays a central role.

“It’s good to have conversations in school to talk about how AI works, that it’s a computer system trained on information that it happens to have available, and not all the information’s right,” said Victor Lee, associate professor at Stanford Graduate School of Education (GSE) and faculty lead for the Stanford Accelerator for Learning’s initiative on AI and Education.

On this episode of School’s In, Lee joins hosts GSE Dean Dan Schwartz and Senior Lecturer Denise Pope as they discuss the limits of generative AI, myths about its use, and what can be done to ensure its equitable use as an educational tool.

Lee’s research taps into research-practice partnerships and involves the design, implementation, analysis, and revision of new learning experiences in educational settings such as schools, school districts, and libraries. Some of that work is being done through the CRAFT AI Literacy Resources Project, an initiative to develop and provide free curricular materials about AI to high school teachers.

“With CRAFT, we’re working with teachers from around the country to build lessons and resources (related to AI) that they can use,” Lee said. “And make it an open discussion in class to see how we can think about using this responsibly for all the different kinds of work that we expect our young people to be doing as they grow up.”

To keep up with our research, subscribe to our newsletter and follow us on Instagram, LinkedIn, and Threads.

Never miss an episode! Subscribe to School’s In on Spotify, Apple Podcasts, or wherever you get your podcasts.

Victor Lee (00:00):

It is not the same as a human. It can do things that are pretty human-like, but it's still not human. And that's really important because humans make really life-important critical judgments, and we need to make sure humans are still doing that.

Dan Schwartz (00:18):

Today we're diving into a topic that may change the landscape of education. This is the rise of AI and its implications for how we teach and what students should learn. So artificial intelligence isn't all that futuristic anymore. It's happening right now and it's happening quickly. So how do we use AI responsibly in schools? How do we prepare students to navigate a world where AI is going to play such a central role?

Denise Pope (00:43):

Absolutely. Dan, things are really moving fast. It's a little scary, right? There's a lot of potential for AI in education, but it's also a little bit freaking me out, right? There's a lot of misconceptions about AI. So I'm really glad we have an expert here. Let's get into our episode.

(01:03):

Welcome to School's In, your go-to podcast for cutting-edge insights in learning. In each episode, we dive into the latest trends, innovations, and challenges facing learners. I'm Denise Pope, senior lecturer at Stanford GSE and co-founder of Challenge Success. And I'm here with my co-host, Dan Schwartz, dean of the Stanford Graduate School of Education and faculty director of the Stanford Accelerator for Learning.

Dan Schwartz (01:31):

Hi Denise. I hope you're doing well. I'm not going to let you quite get away with that. What's scary about it?

Denise Pope (01:37):

What's scary about AI? I just think how... It feels like the robots are taking over the world. And it feels like everywhere you look, there is someone trying to make a buck off of AI, and I'm not sure they're really putting students at the center. So yeah, I think it's a little scary. It's a little scary.

Dan Schwartz (01:56):

Okay, that's scary. I buy it. I think this is going to be a fun show. Of course, all our shows are fun, mind you. But right now it's all AI, all the time. I meet with a lot of people, and everyone wants to talk about AI. It's sort of the number one question after "Does my kid use the cell phone too much?" I'm thrilled to have with us Professor Victor Lee, our very own professor at the Graduate School of Education and an expert in artificial intelligence and education. Victor's done a lot of work in technology and learning, but right now he's really focused on how AI can be integrated into classrooms and the potential it has to enhance and complicate the learning process. Victor, thank you so much for joining us. So here's the first question, to give people an orientation: Can you walk us through what educators and students need to know about AI at a high level?

Victor Lee (02:45):

Well, sure. I mean, up until the last decade, we'd been hearing, oh, AI is the kind of thing that might win a game of Go or a game of chess. But since 2022, it's been pretty impressive that we've started to see AI that can produce new content. That's something we just didn't expect AI to be able to do, and that's the kind we call generative AI, and that's what ChatGPT does. I mean, ChatGPT is sort of like a conversational partner, a chatbot, and you can ask it to write a poem for a friend's retirement, or instructions for playing chess, or a recipe. Those did not necessarily exist in the forms that ChatGPT writes them, and that's been quite impressive. But it's not just text; it's also moving into video, sound, music, and images. So we're seeing this whole new world in which what we thought of as really amazing bots or game-playing machines can now actually be partners in creating content.

Denise Pope (03:47):

Can I say one thing that I think would help our listeners? Because I did not know this. ChatGPT is like Kleenex, in that Kleenex is a brand of tissue that you use to blow your nose, but everybody now calls all of those tissues Kleenex, even though there are different names for them. So let me just make sure I'm getting this right. There are other names for generative AI and AI that uses large language models besides ChatGPT. I don't want people to be confused, because people talk about ChatGPT as if it's all the same. There's Bard. I don't even know, has Bard been discontinued? Whatever Microsoft just came out with has a different name. So I don't want people to be confused by all the different names. They're all doing the sort of generative AI task that we're talking about. Is that right, Victor?

Victor Lee (04:38):

Yeah, that Kleenex example is great, although it's really hard for me not to picture somebody blowing their nose into the AI. But yeah, it's sort of the brand name. It's a big brand name, one of the most well-known, but it's not the only one. We can blow our nose into many other products, and we can use AI from many other brands, including Google and smaller companies that are still showing up on the market.

Denise Pope (05:03):

Okay, I just wanted to clear the air. All right, Dan, I know you have a big question.

Dan Schwartz (05:07):

No, no. Now that we've sort of given the backdrop, I want to get to the meat of the show. So Victor, what should kids and teachers know about AI?

Victor Lee (05:16):

Well, there are a lot of things they're going to need to think about. I mean, one is that it is not the same as a human. It can do things that are pretty human-like, but it's still not human. And that's really important because humans make really life-important critical judgments, and we need to make sure humans are still doing that. It also means it doesn't have the same way of thinking about telling the truth or being accurate that humans are socially accountable for. So that's something else: while it may sound human or realistic, we don't know that the information is reliable, because it's generating it on the fly and it doesn't have the capacity, like we do, to check: Was that right? Did I say something real? And it makes stuff up. I mean, that's what they call hallucinations in generative AI.

Dan Schwartz (06:07):

I make stuff up.

Denise Pope (06:08):

I was just going to say-

Dan Schwartz (06:08):

It doesn't feel that deep.

Denise Pope (06:10):

I know people who lie.

Victor Lee (06:13):

For sure, but when you do it, you're doing it intentionally, hopefully not for bad reasons. But this idea of intention, the AI doesn't have it in the same way a human does. It's not thinking, I'm looking to deceive you. It's looking for the right-sounding thing to tell you, with no idea whether it's right or wrong. One of the best ways to see this is to ask some of these tools to do math. They're terrible at math because they don't really understand that you want them to do math. They want to say the right-sounding thing back, which, when someone asks a math question, might mean you want to hear numbers. So they'll tell you numbers, but they may not be the right numbers for the problem you're asking about.

Denise Pope (06:52):

But I hear it's getting better, right? So eventually it's going to be able to do math and not make mistakes. Yes or no?

Victor Lee (07:00):

I mean, we should expect some improvements for sure, but even as it gets better, we're going to be changing the kinds of things that we're doing. So the demands are always going to be changing. The bar will continuously get moved, so we can count on it to get better, but we as humans are going to be getting better too, and that's going to change what "better" is supposed to mean.

Dan Schwartz (07:21):

How do you teach this? Say I'm teaching an 8-year-old not to trust the computer, even though it sounds so trustworthy. How do you do that?

Victor Lee (07:31):

Well, I mean, it's good to have conversations, whether it's with an 8-year-old or in school, to say a little bit about how this works: that this is a computer system, which is not human, and it's trained on information that it happens to have available. People can draw on their own intuitions and experiences to recognize that not all of that information is right. So it's not an expert on a lot of things. Making that known as an expectation is really good. And also teaching the kinds of things we want to do when we use AI: we want to cross-check and verify with somebody who does know and look at real sources of expertise, and we want to be clear about what parts AI has contributed to the work that we've been doing.

Dan Schwartz (08:15):

So Victor, you have a project called CRAFT that's explicitly going into the high schools. Say a little bit about what you're trying to do and how.

Victor Lee (08:22):

Yeah, I mean, one of the things we're all struggling with is that everyone's got questions about AI. They want to know what this is going to do to our world, and that's going to show up in a lot of different ways, whether you're going to be a scientist or work in business or work for a nonprofit. So with CRAFT, we're working with teachers from around the country to build out lessons and resources they can use, whether they're an art teacher, English teacher, math teacher, or science teacher in high school, and bring this in as a topic for open discussion in class: to see a little bit about what's going on under the hood, and how we can think about using it responsibly for all the different kinds of work and activities that we expect our young people to be doing as they grow up.

Denise Pope (09:09):

And I know there's a pretty cool lesson on there about AI and bias, or issues around bias with the technology. Can you just say a little bit about that? Because I think that's part of what parents and teachers are nervous about: we're hearing that it could be promoting some unfair biases.

Victor Lee (09:35):

For sure. Yeah, AI, like any sort of new technology, lets things happen faster. And for bias to be put out there faster and in larger amounts is kind of a scary thought. The bias that we do see comes from the fact that AI is trained on data, and the data that we have can be really incomplete. Say the data only include people who sound like Dan Schwartz, and then the system hears Denise Pope speaking. It may not be able to understand what Denise Pope is saying just because it's only used to how Dan Schwartz talks. Now think about that with all the images being used on the internet, or the ways that people write or talk. Those are some of the concerns. And because these tools can make things move fast and spread so quickly, that raises the risk of bias amplifying at a rate we just would not be comfortable with.

Denise Pope (10:18):

I have a question for you, Dan.

Dan Schwartz (10:18):

Okay.

Denise Pope (10:26):

So AI keeps getting better and better. It keeps being able to do more and more things. And so my question to you is... And I'm serious, don't mock me. Do you think there's going to be a time when AI will fully replace a human being?

Dan Schwartz (10:42):

Wow, maybe I have two answers. One is glib and one is more serious. So I've thought about this: could AI fully replace me? What that means is that there would be another Dan Schwartz, and suddenly I meet myself and we duke it out for which Dan Schwartz gets to actually be in the world. No, I don't think it's going to replace me.

Denise Pope (11:10):

No?

Dan Schwartz (11:12):

No, no. I think it's going to be more augmentation. So this is my favorite story about this, where people use AI to surpass what they could do on their own. And I think that's a likely model for it, or it's a model I'd like to see. The example is that last year I happened to be at several retirement parties, and the person who organized the retirement party would read a poem. This happened at three separate parties. The organizer read this poem and said, "ChatGPT wrote this poem with me."

(11:48):

And they were so proud. They were so proud of their poem. I think the person who was retiring was thinking, you couldn't buy me a Hallmark card? You couldn't even go to that much trouble; you just had the computer write a poem. But the person who wrote the poem was incredibly proud. And that's sort of when I got the clue that they don't see this as cheating. They don't see this as replacing; they see this as augmenting. It enables them to do things they couldn't do before. And so I think that's the kind of vision I like for AI in education.

Denise Pope (12:17):

Yeah, I mean, it makes sense. I know there's a famous line about AI in radiology, where AI is used a lot. The line was that soon AI is going to replace radiologists. And what a lot of people say now is, no, radiologists who use AI to augment what they do, exactly what you said, to surpass or extend what they do, are going to replace radiologists who don't use AI. So that's kind of the line that keeps me sort of sane. Like, okay, we still need people, right? We still need people.

Dan Schwartz (12:48):

No, the AI is going to replace things that can be automated. It's very good at little tasks. It'll be very good at that. But I do think it's an interesting question about what a future world with AI looks like, particularly with respect to education.

(13:10):

So let me switch a little bit. A student comes to you and says, "Professor Lee, I don't need to learn how to write anymore. The AI is going to do it for me." What do you say back, Denise? Do you, like, hit them upside the head and say English literature and writing are a beautiful thing?

Denise Pope (13:30):

You know me so well, Dan. No, I'll say, "Go talk to Victor Lee." I want to hear Victor's response, because... First of all, in my own courses, my grad students ask, can I use ChatGPT to help me write a paper, to help me complete this assignment? Nobody has ever come up to me and said, why do we need to write at that level? But I have had high schoolers in focus groups and middle schoolers in focus groups saying, if AI can do this, why do we need to learn it in school? And what's the purpose of school if AI can do all of these things? So I'm really interested to hear Victor's answer.

Victor Lee (14:06):

Yeah. For one thing, I mean, that's sort of all-or-nothing thinking. Really, what's going to happen is that AI is going to lead to a bit of a reallocation of how much we emphasize some of these things. So here's a quick example. On my computer, I use a grammar-editing tool, Grammarly, if you've had experience with it. It helps me when I use too much academic jargon or I have typos, which is quite often. And the question there is, do I still need to learn grammar, or do I need to learn how to spell, if Grammarly is going to take care of it for me? I would say yes. We don't want Grammarly to be constantly changing everything, and we have to be able to think through and make a judgment as to whether a recommended change is the right one.

(14:53):

Now, does it mean we need to spend all this time doing those grammar trees, figuring out all the nuances, drilling spelling, and making it so high stakes? Probably not. But it is important to be able to tell when you should be using your, you're, or yore, or there, their, or they're, even though we now have tools that can help make these things move faster. It's not a bad thing to have the AI around, but it is important that we know enough to make good judgments, and that we let it do the thing it's good at, which is fixing up some of our typographical errors, and help us do the things we're good at, which is coming up with interesting and creative ideas.

Denise Pope (15:32):

But I know that ChatGPT also comes up with what could be considered creative ideas, right? Dan, you could write a poem. So are you cool with your students using it for creativity?

Victor Lee (15:47):

You know, it really can be. I mean, where does creativity come from? We want to think it all comes from exactly one person's head. So if you put somebody in a room with nothing on the walls and no surfaces, we'd hope, oh, can they be creative and do something amazing? But even at, say, the Stanford Design School, they put people in these big, loud, colorful rooms with toys and objects and Post-it notes and paper, because actually having those things helps you be more creative. It helps you get out more ideas. So ChatGPT is going to be pretty similar. If I have a thought partner, if I have something that can offer an idea that's a little bit off the wall that I can improvise with, that could actually be a great creativity amplifier. So again, it's not all or nothing; it's just going to shift the emphases. And if we do this well, we may even get better products or better outputs as a result.

Denise Pope (16:37):

So then I have a question about equity. Some people see this as a great leveler: now everybody has the ability to write, or sound like a writer, or be creative. And other people are saying, no, this is just going to exacerbate things, because the haves, the people who have the technology, are going to be able to use it, and there are a bunch of people who don't. So what are your thoughts on people who are worried about the equity issue here?

Victor Lee (17:02):

Well, I mean, as a school of education, I think this is a really good conversation, because schools are going to be one of the main ways we make sure there's access available to everybody. One of the important things to keep in mind is that if we just flat-out ban this, we're not preparing kids for the future world. And for kids who won't have easy access to this, maybe because family members aren't as familiar with it or they don't have the same technology available at home, we're not making it possible for them to learn it themselves. So I think that's one way to address those equity concerns. And if we do think this is going to be a big part of future life and the future economy, then making sure that everyone has an equal starting point, in terms of knowing what you can do with this, what its potential is, and where its limits are, would probably be one of the most important ways we could address equity.

Dan Schwartz (17:59):

Can I take a shot at this Denise?

Denise Pope (18:00):

Yeah, go for it, Dan.

Dan Schwartz (18:02):

I like Victor's response. I don't think it's just a question of access. It's also a question of what people have access to. A lot of our curriculum is designed for the average, and so kids with special... they have learning differences, or English is not their first language. The hope is that these large language models have captured enough of human experience that they can actually personalize toward things that aren't just right down the middle average, so it's not just access, but access of a quality that's tuned to the needs of particular learners, which has always been a challenge for us. So I think that's one possible help. It's not just a question of access; it's also a question of what they're getting access to.

Victor Lee (18:51):

Absolutely.

Dan Schwartz (18:52):

So in our remaining minutes, I want you to put on your VR goggles and imagine the future. What is the classroom of the future going to look like? Kids just doing the same thing, but with a little personal assistant on their cell phone saying, "No, dummy, don't do it that way. Do it this way"? What's it going to look like?

Victor Lee (19:10):

I like to think it's going to look a bit more Star Trek-like, where we're going to have some pretty cool technologies that can bring things really quickly into the classroom space in ways we just haven't been able to in the past, whether it's a holographic simulation that interacts and talks back with you, or, if you want to see a reenactment of some big historical moment, you could bring that quickly to a class and not have to rely on the field trip in the same way.

(19:39):

We may get ways to pair up students in all different sorts of groups to help them learn as much as they can from each other and collaborate in interesting new ways, and really powerful tools such that, in the course of a school day, maybe they've already developed a whole new app just between the morning and the afternoon, because these tools are so powerful. And in the course of that, they're learning a whole bunch of amazing new stuff about what that app addresses, how apps work, and the questions they have about their own personal interests.

Denise Pope (20:11):

Can I just say I had a little glimpse of the future yesterday? This is true. In my class, one of my students brought a headset in, and I don't even know what it does. He put it on; it's like big white goggles. He looked like a human fly. And he's looking around the room and looking at... And none of my other students knew what was going on. I didn't know what was going on. And I thought, if this is the wave of the future, it's freaking me out a little bit. I can't make eye contact with this kid, right? So are we all going to be in goggles so that we can make eye contact? It was a weird experience, let me just say that. What are your reactions?

Dan Schwartz (20:52):

That is a weird experience. I would go a different direction. I think the amount of time that students spend creating, making, producing will increase a tremendous amount. So I've always thought computer science is kind of a privileged domain for teaching because students make things and then they get feedback, they get to see how it works, how people use it. I think the new technologies will make it possible to do that in every discipline so that kids, for example, could design an ecosystem and see if it actually works based on the science.

(21:24):

So I think there could be a huge change in pedagogy that allows students to basically own the means of production to help them create and produce. My big fear, of course, is that we'll just use AI to be really efficient at teaching in the ways we always have, which aren't perfect. But my vision in the classroom of the future is kids are doing a lot more projects because the AI can help manage the class. Project-based learning is great, but it's tough on the teacher. Now the AI can keep track for them. Kids can be creating things, simulations, and so forth.

Denise Pope (21:59):

It sounds like a lot more professional development. That's what I think.

Victor Lee (22:02):

Yeah, for sure.

Dan Schwartz (22:03):

It'll be like YouTube. Every teacher's got a projector, uses YouTube. It's that easy.

Denise Pope (22:10):

Okay, you're coming back... Victor's going to come back in a year or two and report back. Because I think you're oversimplifying.

Victor Lee (22:17):

Well, maybe I'll just send my AI to report back on my behalf.

Denise Pope (22:21):

Yes. And you'll be talking to the fake Dan. You'll be talking to the AI Dan when you come back. Oh, my goodness. Well-

Victor Lee (22:26):

It's sort of surreal to imagine that our AI doppelgangers are going to be having all the conversations, and it's just our AI talking to other people's AI. And what are we doing? Probably sipping drinks at the beach.

Dan Schwartz (22:37):

Yeah, that was the image I had. I agree with you, Victor.

Denise Pope (22:40):

No. That freaks me out, people. No.

Dan Schwartz (22:43):

You're going to be skiing, instead-

Denise Pope (22:45):

No, I'm just like, did you not see that movie where everybody's a couch potato because all of their AI people are doing all their AI thing and we just sit back and become couch potatoes?

Victor Lee (22:54):

Well, that's why we like those movies. They give us cautionary tales to remind us that that isn't the future we want. Although I would enjoy some time on the couch and a nice drink every now and then.

Denise Pope (23:04):

That sounds really nice, doesn't it? I would like some time alone on a couch with a drink in hand too. But I don't think we can be full-time couch potatoes, maybe part-time couch potatoes. That would be really nice. Victor, thank you so much for joining us today. We learned so much.

Dan Schwartz (23:20):

Yes, thank you, Victor. That was fascinating, insightful.

Denise Pope (23:23):

It's really clear that AI has this great potential. It's still a little scary, still a little daunting. I'm not going to lie to you, but you really helped us understand the landscape and all that's at stake here.

Dan Schwartz (23:35):

Yeah. You've given us a lot to think about, especially when it comes to preparing our students and educators for the changes AI is going to bring to schools and society. So Denise, let me put you in the hot seat. What's your big takeaway?

Denise Pope (23:49):

Oh, my gosh, there are so many takeaways. One thing that I am going to really remember and think more about is that AI can be used to spark and push us creatively as humans. But we really need to teach students how to critically evaluate the information that AI is providing. Just because something sounds like it's right doesn't mean it is always right.

Dan Schwartz (24:11):

No, that's nice. I think that's well said. I personally like the creativity aspect. I don't think people should only view AI as scary or as replacing humans or cheating on tests for students. I think there's a lot of room for people to be very productive and creative with AI.

Denise Pope (24:27):

Totally. And there's another thing I don't want us to overlook, which is the equity issue here. Because as AI becomes more integrated into education, we need to really make sure that all students are benefiting from these tools, and that's not always the case. So Victor, it's a lot to navigate, but I know there's a ton of potential. I can't wait to bring you back in a few years to see how it's evolved.

Dan Schwartz (24:50):

Denise, in months everything will have changed. Our AI robots are going to meet up and have a chat on this topic for us and let us know.

Denise Pope (24:59):

Oh, no. Okay. Well, we'll see. You never know. In any case, thank you again to Victor and thank you all for joining us on this episode of School's In. Remember to subscribe to our show on Spotify, Apple Podcasts, or wherever you tune in. I'm Denise Pope.

Dan Schwartz (25:15):

I'm Dan Schwartz. I'm a synthetic AI robot who serves education for everyone.


Faculty mentioned in this article: Dan Schwartz, Denise Pope, Victor R. Lee