The Future of Everything podcast

The future of educational technology (from The Future of Everything)

This week we are sharing an episode from The Future of Everything, with host Russ Altman, featuring GSE Dean and School's In co-host Dan Schwartz in the hot seat.
September 4, 2025
By Olivia Peterkin

While there have been many suggestions for AI use in classrooms — from grading papers to improving learning outcomes in less time — it can be difficult to know where the tool's true possibilities and limitations lie.

Muddying the waters, according to Stanford Graduate School of Education (GSE) Dean Dan Schwartz, is the disconnect between how edtech companies are using the tool, and what teachers and students are looking to create with it.

“What’s happening in the industry is [that they’re] using the AI in the way that nobody else uses it,” Schwartz said. “Everybody who’s got this tool wants to create with it. Like my brother. It’s my birthday. What does he do? He asks ChatGPT to write a poem about Dan Schwartz at Stanford.

“Meanwhile, the [edtech] field is trying to push towards efficiency. Can we get the kids done faster? Can we get ’em through the curriculum faster? Can we correct them faster? In which case the kids are going to optimize for being really efficient. As opposed to just trying to be creative, innovative, and use it for deeper kinds of things. This is my big fear.”

This week we’re switching it up and sharing an episode from Stanford School of Engineering’s The Future of Everything podcast, originally broadcast in August 2024, with host Russ Altman. In a role reversal, GSE Dean Dan Schwartz is the featured guest and he discusses the future of educational technology.

“The question is, how can, how can you take advantage of industry? You know, education’s a public good, but they still buy all their products,” Schwartz said. “And so going through those companies is one way to sort of bring a positive revolution.” 

[00:00:00] Denise: Hi everyone, it's Denise Pope and this week on School's In we're gonna do something a little different. While our team is supporting the opening of the new Stanford GSE building, we wanted to share an episode from another great Stanford podcast, the Future of Everything hosted by Russ Altman from the School of Engineering.

Russ sits down with my absolute favorite co-host, Dan Schwartz, to talk about all things artificial intelligence. The conversation originally aired last summer, and it's just as relevant today, if not more. So they explore what AI might mean for students, teachers, schools, everything from grading papers to how we can actually make learning stick.

You'll hear Russ leading the conversation, and it might sound a little more tech forward than our usual episodes, but at the heart of it, it's still about how we teach and learn. We are so excited to share it with you. So let's dive in. 

[00:00:54] Dan: You know, the tough question for me is, should you let the kid use ChatGPT during the test?

Yeah. Right. And, and we had this argument over calculators, right? And, and finally they came up with ways to ask questions, where it was okay if the kids had calculators because the calculator was doing the routine stuff, and that's not really what you cared about. What you cared about was could the kid be innovative?

Could they, uh, take us another, a second approach to solve a problem, things like that.

[00:01:27] Russ: This is Stanford Engineering's The Future of Everything, and I'm your host, Russ Altman. You know, the rise of AI has been on people's minds ever since the release of ChatGPT, especially the powerful one that started to do things that were scary good. We've seen people using it in business, in sports, in entertainment, and definitely in education.

When it comes to education, however, there are some fundamental questions: Are we teaching students how to use AI, or are we teaching students? How do we assess them? Can teachers grade papers with AI? Can students write papers with AI? Why is anybody doing anything? Why don't we just have the AI talk to itself all day?

These are real questions that come up in AI. Fortunately, we're gonna be talking to Dan Schwartz, who's a professor of education and the dean of the School of Education at Stanford University, about how AI is impacting education. Dan, the release of ChatGPT has had an impact all over the world. People are using it in all kinds of ways, and clearly one of the areas where AI, and especially generative AI, has made an impact is in education.

Students are clearly using it. Teachers are thinking about using it or using it. You are the Dean of Education at Stanford. What's your take on the situation right now for AI in education? 

[00:02:49] Dan: Okay, so lots of answers to that, but, but you know, the thing I've enjoyed the most is, uh, showing it to people and watching their reaction.

So I'm a cognitive psychologist. I study creativity, learning what it means to understand, and you show this to people and you just see them go, oh my Lord. And then the next thing you see is they begin to say, uh, what? What's left for humans? Like what's left? And then they sort of say, wait a minute, will there be any jobs?

And then finally they sort of say, oh my goodness, education needs to change. And as a dean who raises money for a school, this is the best thing that's ever happened. No, whether it's good or bad, it doesn't matter. Everybody realizes it's gonna change stuff. And so it's really an exciting time.

[00:03:38] Russ: So that, that is really good news, I have to say, going into this.

And I, and, and I have to reveal a bias. I have often wondered if technology has any place in a classroom, and I think it's because I was, uh, I was injured as a youth. This is in the 1970s when some teachers tried to put a computer program in front of me and I was a pretty motivated student and I worked with this computer for about six minutes.

And I should say I'm not an anti-computer person. I literally spend all my time writing algorithms and doing computational work, but I just felt, as a youth, that I wanted to have a teacher in front of me, a human telling me things. Uh, and so that is clearly not the direction. I, I hear you laughing. So talk to me about the appropriate way to think about computers, because I really have a big negative reaction to the idea of anything standing between me and a teacher.

[00:04:34] Dan: You must have had very good teachers. I might have. So, so Russ, you sound like someone who doesn't play video games.

[00:04:39] Russ: I do not play video games. Yeah. So, 

[00:04:41] Dan: So there's this world out there where people get to experience things they could never experience directly, and no teacher can deliver this immersive experience of you in the Amazon searching for anthropological artifacts. There's also something called social media. I've heard about this. Yeah. Yeah.

[00:05:00] Russ: I think we disseminate the show using it. 

[00:05:02] Dan: So, so back in the day. 

[00:05:04] Russ: Okay, so I'm a dinosaur. 

[00:05:06] Dan: Uh, back in the day you got the Apple II maybe, and it's got about 64K. Okay.

Maybe it's got a big floppy drive, and it takes all its CPU power to draw a picture of two plus two on the screen. So, so I think things have changed a little bit since then. But I appreciate your desire to be connected to teachers. I, I don't think we're replacing them.

[00:05:30] Russ: I am not gonna give you a lecture about teaching, but I will say this one sentence that was reverberating through my brain when I was getting ready for our interview, which was when I'm in a classroom, and this has been since I've been in third grade, I am watching the teacher trying to understand how they think about the information and how they struggle with it.

To like understand it and then try to relay it to me. And so that's where I'm learning. It's not even what they're saying; they're painting a picture of their cognitive model of what they're talking about. And that's what I'm trying to pull out to this day. And so that's why I have such a negative reaction to anything standing between me and this other human who has a model that is more advanced than mine about the material that we're struggling with, and I just, I'm trying to download that model.

[00:06:17] Dan: Wow. You're, you are a cognitive psychologist, Russ. I mean this. Like, I had a buddy who sort of became a Nobel laureate, and he talked about how he loved to take apart cars, and I'd say, I love to watch you take apart cars. Right. Just to figure out what you're doing.

No, so I think, okay, let's separate this. There's the part where you think the interaction with the teacher's important. I don't know that you need it eight hours a day. Yeah. You know, that's an awful lot of interaction. I'm not sure I wanna be with my, yeah, my mom and dad for eight hours a day trying to figure out their thinking.

So you don't need it all the time. On the other side, you know, we can do creative things with a computer. So for example, I wrote a program where students learn by teaching a computer agent. And so they're trying to figure out how to get the agent to think the way it should in the domain. Turns out it's highly motivating.

The kids learn a lot. The problem was the technology quickly became obsolete, because after kids used it for a couple of days, they no longer needed it, 'cause they'd figured out how to do the kind of reasoning that we wanted them to teach the agent to do.

[00:07:23] Russ: That's exactly what I was talking about before about my relationship with my teacher and you just flipped it.

But it's the same idea, which is that there's a cognitive model that you're trying to transfer, and by doing that transfer, you introspect on it, and you understand what it is that you're thinking about.

[00:07:38] Dan: I think that's right. You know, so, so the concern is the computer does all the work.

Right, right. And so I'm just sitting there pressing a button that isn't relevant to the domain I'm trying to learn. But, you know, one of the things computers are really good at, like, as good as casinos, is motivation. So some computer programs, they gamify it. I'm not sure that's a great use of it.

Because, you know, you learn to just beat the game for the reward. Right, right. As opposed to learning the content. But with things like teaching an intelligent agent how to think, there's something called the protégé effect, which is that you'll try harder to learn the content to teach your agent than you will to prepare for a test.

[00:08:21] Russ: Ah, 

[00:08:21] Dan: right. So, so we can make the computer pretty social. 

[00:08:24] Russ: Okay. So you are clearly a technology optimist in education, and in addition to the amazing fundraising, there's so many questions to be answered. What I think a lot of people are worried about is, are we at risk of losing a generation? We've already lost a few generations of students, some people argue, because of the pandemic and the terrible impact it had, especially on people who weren't privileged in society and in their education. Are we about to enter yet another shock to the system where, because of the ease of having essays written and papers graded, we really don't serve a generation of students well?

Or do you think that's an overhyped, unlikely-to-happen thing?

[00:09:07] Dan: No, it's a good question. You know, part of this is people's view about cheating, and so it's, it's too easy for students to do certain things. There's another response that I wanna hang on to. I wanna ask you, Russ. Yeah.

Are you using it? You teach.

[00:09:24] Russ: Yeah. 

[00:09:24] Dan: Are you, are you like putting in all sorts of rules to prevent students from cheating, or are you saying, sure, use it, do whatever you can, I'm gonna outsmart your technique anyway?

[00:09:33] Russ: It's a little bit more of the latter. So I teach an ethics class, which is a writing class, and we allow ChatGPT, because my fellow instructor and I decided, and this was the quote, we want to be part of the future, not part of the past.

So we said to the students, knock yourselves out.

[00:09:49] Dan: Sorry, the future of everything, Russ. Thank you.

[00:09:51] Russ: Thank you. Thank you. Thanks for the plug. So, uh, we allow it. We ask them to tell us what prompt they used and to show us the initial output they got from that prompt. And then we, of course, have them hand in the final thing, and we instruct the TAs, and ourselves, that when we grade, we're grading the final product, with or without a declaration of whether ChatGPT was used. We do have engineers as TAs, which means that they did a careful analysis. Students who used ChatGPT, and I don't think this is a surprise, got slightly lower grades but spent substantially less time on the assignment.

So if you are a busy student, you might say, I will make that trade-off, 'cause the grades weren't a ton worse. It was like two points out of a hundred, from a 90 to an 88, and they completed it in like half the time.

[00:10:42] Dan: Uh, do, do you think they learned less? 

[00:10:44] Russ: So, we don't know. We don't know. And, uh, yeah, the evaluation of learning is something where I'm looking to you, Dan.

Uh, yeah. How do I tell? So, um, we do try to use it, but we are stressed out. We have seen cases where people say they used ChatGPT but tried to mislead us about how they used it. They said, I only used it for copy editing, but it was clear that they did more than copy editing with it. Yeah. And so, at the edges, there are some challenges, but in the end we said motivated students who wanna learn will use it as a tool and will learn.

And the students who we have failed to motivate, and it is our failure, you could argue, they're just gonna do whatever they do, and we are not gonna be able to really impact that trajectory very much.

[00:11:28] Dan: Yeah, you know, you sort of see the same thing with video-based lectures. So I'm online, I've got this lecture.

Do I really want to sit and listen to the whole thing? Not really. I'm gonna skim forward to find the information. I skim back. I'm probably gonna end up doing the minimum amount if it's not a great lecture. 

[00:11:45] Russ: Yeah. 

[00:11:46] Dan: So I'm not sure this is a ChatGPT phenomenon. It's just, it's sort of an enabler. I think the challenge is thinking of the right assignment.

[00:11:54] Russ: Yep. Yep. So, 

[00:11:56] Dan: So like you can grade things on novelty and appropriateness. So are they novel? You know, if they use ChatGPT like everybody else, they won't be novel. They'll all produce the same thing.

[00:12:05] Russ: Yes. So there's, um, the most common type of moral theory, which is called common morality.

And it turns out that ChatGPT does pretty well at that one, 'cause there are so many examples that it has seen, and it's terrible at Kantian deontology. It really can't do it. Okay, so let me, let me... Wait, wait.

[00:12:24] Dan: So, yeah, let me get back to your question. Yeah. So here's what I see going on right now.

There are, like, big industry conferences, because they're producing the technology that schools can adopt, right? And there's a lot of money there. Twenty years ago there were zero unicorns in ed tech, and I think last year there were 54 billion-dollar-valuation companies. So this is a big change.

So what are they doing? They're basically creating things to do stuff to students, right? So maybe they're marketing to the teachers, but it's, you know, I'll make a tutor that is more efficient at delivering information to the students. Or I will make a program that can correct their math very quickly.

And so what's happening is the industry is sort of using the AI in the way that nobody else uses it, 'cause everybody who's got this tool wants to create stuff, right? Like, uh, my brother. It's my birthday. What does he do? He asks ChatGPT to write a poem about Dan Schwartz at Stanford. What he doesn't know is that there are a lot of Dan Schwartzes, and so evidently I wear colorful ties. But this is what everybody wants to do.

They wanna create with it. Meanwhile, the field is trying to push towards efficiency. Can we get the kids done faster? Can we get 'em through the curriculum faster? Can we correct them faster? In which case the kids are going to optimize for being really efficient. Yeah. Right. As opposed to just trying to be creative, innovative, and use it for deeper kinds of things.

So, so this is my big fear. 

[00:14:00] Russ: And so you're watching these companies, and I'm guessing that they don't always ask your opinion. So let's say one of these unicorn, billion-dollar-or-more companies comes to you and says, we wanna do this right. We want to use the best educational research to create AI that can bring education to people who might otherwise not have quality education.

What would you tell them? 

[00:14:22] Dan: So this is a challenge, right? This is something we're actively trying to solve. We've created the Stanford Accelerator for Learning to kind of figure out how to do this, because I've been in this ed tech position for quite a while, and the companies come in and they say, we really want your opinion.

Mm-hmm. And then they present what they're doing, and I go, uh, have you ever thought of? And they go, wait, wait, wait, let me finish. And this goes on for 55 minutes where they're telling me what they want to do, and I'm trying to say, you know, if you just did this. And the way it ends is I say to 'em, look, if you do these three things, I'll consider being an advisor.

[00:15:00] Russ: Right? 

[00:15:01] Dan: They never come back. 

[00:15:03] Russ: So the message, the message you're sending them is just not in their worldview. 

[00:15:08] Dan: It's 'cause they have a vision. Everybody wants to start their own school. 

[00:15:11] Russ: Yeah. 

[00:15:11] Dan: They have their vision of what it should be and, and they're urgent to get it done. And, you know, it's a startup mentality.

So trying to figure out how we can educate them. You know, I think we know a lot about how people learn that we didn't know 20 years ago, when they went to school. And the AI, you know, one of the things it can do is implement some of these theories of learning in ways that don't exist in textbooks and things like that.

So, so that's the big hope. And the question is, how can you take advantage of industry? You know, education's a public good, but they still buy all their products. And so going through those companies is one way to sort of bring a positive revolution. But again, I'm a little worried that the companies are sort of optimizing for a local minimum.

You know, to, to accommodate the current schools and things like that. 

[00:16:03] Russ: So should we take solace in the teachers? Many of us are fans of teachers: grammar school teachers, middle school teachers, high school teachers. Many of these folks are incredibly dedicated. Will they be a final, um, a final filter that looks at these educational technologies and says, absolutely not? Or, yeah, we'll use that, but we're gonna use it in a way that makes sense for my way of teaching? Or are they not in a position to make those kinds of, what you could call, courageous decisions about modifying the use of these tools to make them as good as possible on the ground?

[00:16:39] Dan: Great expression, courageous decisions. I really like that. So it's pretty interesting. In the surveys I've seen over the last year, different groups do different surveys, and if I take the average, about 60% of K-12 teachers are using gen AI, right? And about 30% of the kids. If I go to the college level, about 30% of the faculty are using gen AI in teaching.

And about 80% of the kids are using it. So I do think in the pre-K-to-12 space, yeah, the teachers are making decisions. They do a lot of curriculum. So a great application is project-based learning. Project-based learning is a lot of fun. Kids learn a lot. They sort of develop a passion, as opposed to just mastering the requirements. But it's really hard to manage. You know, when I was a high school teacher, I had 130 kids, right? If all of them have a separate project, I have to help plan 'em and make 'em, you know, learning-goal appropriate. So the gen AI can help me do that. It can help the kids design a successful project.

Uh, it can help me with a dashboard that helps manage them hitting their milestones, things like that. And there, you know, the teacher is like, I can do something I just couldn't do before.

[00:17:56] Russ: Yeah. Yeah. 

[00:17:57] Dan: It, it's different than the model where you put the kids in the back of the room who finished early and say, go use the computer.

[00:18:03] Russ: Right. 

[00:18:04] Dan: But I think, you know, uh, most schools, kids are carrying computers in classes, so it's a little different. It's more integrated than it used to be. 

[00:18:12] Russ: This is The Future of Everything with Russ Altman. More with Dan Schwartz next.

Welcome back to The Future of Everything. I'm Russ Altman, and I'm speaking with Dan Schwartz, professor of education at Stanford University. In the last segment, Dan told us about AI in education, some of the promises and some of the pitfalls that he's looking at on the ground, thinking about how to educate the next generation.

In this segment, I'm gonna ask him about assessment and grading: how do we do that with AI, and how do we make sure it goes well? I'm also gonna ask him about physical activity, because it turns out physicality is an important part of learning. I wanna get a little bit more detail, Dan, in this next segment, and I want to start off with assessment and grading.

I know you've thought about this a lot. People are worried that AI is gonna start doing all the grading. Everybody knows that a high school teacher with a couple of big classes can spend their entire weekend grading essays. It is so tempting just to feed that into ChatGPT and say, hey, how good is this essay?

How should we think about, maybe worry about, but maybe just think about, assessment in education in the future?

[00:19:32] Dan: Yeah, this was, uh, remember the MOOCs? Yes. Massive open online courses. Yes. And you're hoping you have 10,000 students, and then you gotta grade the papers for 10,000 students. So what do you do?

You give 'em a multiple-choice test, which can be machine scored. Right. So I think that's always there. I'm gonna take it a slightly different direction, which is: I'm interacting with a computer system, and while I'm interacting with it, it can be constantly assessing in real time, right? Huh?

And so there's a field that's sometimes called educational data mining, or learning analytics, and there are thousands of people who are working on how to get informative signal out of students' interactions. Like, are they trying to game the system? Are they reflecting? And so forth. So this is something the computer can do pretty well, right? It can sort of track what students are doing, assess, and then ideally deliver the right piece of instruction at the moment, right? So you could use the assessments to give people a grade, but really the more important thing is, can you use the assessments to make instructional decisions?

So I think this is a big area of advancement, but here's my concern. We've gotten very good at assessing things that are objectively right and wrong. Like, did you remember the right word? Did you get two plus two correct? But most of the things we care about now are strategic and heuristic, which means there's not a guaranteed right answer.

And so what you really want to do is assess students' choices for what to do. So, for example, creativity: for the most part, it's a large set of strategies.

[00:21:11] Russ: Mm-hmm. 

[00:21:11] Dan: Right? There's a bunch of strategies that help you be creative. The question is, do the students choose to do that? Or do they take the safe route?

'Cause creativity is a risk, right? Because you're not sure. So I think this is where the field needs to go: being willing to say that certain kinds of choices about learning are better than others. And it becomes more of an ethical question now, unlike two plus two equals four, where there's no ethics to it.

[00:21:36] Russ: Are you gonna be able to convince non-educators who hold the purse strings, let's call them the government, that these kinds of assessments are important and need to be included? Because my sense is that when it filters up to boards of education or elected leaders, a lot of that stuff goes out the window, and they just wanna know: how good are they at reading comprehension, and can they do enough math to be competitive with, you know, country X?

[00:22:04] Dan: Yeah. Yeah. So different assessments serve different purposes. Like the big year-end tests that kids take: those aren't to inform the instruction of that child. They're not even for that teacher. They're for school districts to decide, are our policies working?

And so it's really a different kind of assessment than me as a teacher trying to decide what I should give the kid next. So I think it's gonna vary. You know, the tough question for me is, should you let the kid use ChatGPT during the test? Yeah. Right. And we had this argument over calculators.

Right. And finally they came up with ways to ask questions where it was okay if the kids had calculators, because the calculator was doing the routine stuff, and that's not really what you cared about. What you cared about was, could the kid be innovative? Could they take another, a second approach to solve a problem?

Things like that. 

[00:22:55] Russ: Yeah. So I teach another class, a programming class, where the students write programs, and we have switched. Um, and we've actually downgraded the value. So as you know very well, just as background, there is now an amazing thing: ChatGPT can also write computer code, essentially.

And so a lot of coding now is kind of done for you, and you don't need to do it. We are trying to make sure that they understand the algorithms that we ask them to code, and so what we're doing is downgrading the number of points you get for working code. You still get some, but we're upgrading the quiz about how the algorithm works.

Do you understand exactly why this happened the way it did? Why is this data structure a good choice or a bad choice? And you could argue that we should have done this 20 years ago in the same class, but this is making it a more urgent issue, because if we don't, people can just get an automatic piece of code.

They can run it, it'll work, and they have no understanding of what happened. And so it's really a positive. It's putting more of a burden on us to figure out why the heck we had them write this code in the first place.

[00:24:00] Dan: No, this was my point. It makes you sort of rethink what is valuable to learn, and you stop doing what was easy to grade.

[00:24:08] Russ: Right. And, and, 

[00:24:09] Dan: Uh, so I have an interesting one. This is a little nerdy. Okay. I love it. I love it. So I teach the intro PhD statistics course in education, okay, and lots of students say, I took statistics. Right. And I'm sort of like, well, that's great. Let me ask you one question. I say, I'm gonna email you a question, and you'll have five minutes to respond.

You let me know when you're ready for it. And I ask them, and this is just for you, Russ: why is the tail of the t-distribution fat at small sample sizes? And what I get back usually is, because they're small sample sizes.

[00:24:45] Russ: Right. Or because it's the t-distribution.

[00:24:48] Dan: Or, yes, even better. And then I come back and I sort of say, well, have you ever heard of a standard error?

And I begin to get at the conceptual stuff, right? So there are ways to ask conceptual questions that are really important.
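(Editor's note, not part of the conversation: the conceptual answer Dan is fishing for is that the t statistic divides by an estimated standard error, and with a small sample that estimate is itself noisy, which spreads the statistic out. Here is a minimal simulation sketching the point; the sample sizes and simulation count are arbitrary choices.)

```python
import numpy as np

rng = np.random.default_rng(0)
mu, n_sims = 0.0, 100_000

def tail_rate(n: int) -> float:
    """Fraction of simulated t statistics with |t| > 1.96."""
    samples = rng.normal(mu, 1.0, size=(n_sims, n))
    xbar = samples.mean(axis=1)
    s = samples.std(axis=1, ddof=1)        # sample std dev: noisy at small n
    t = (xbar - mu) / (s / np.sqrt(n))     # t = (xbar - mu) / (s / sqrt(n))
    return float(np.mean(np.abs(t) > 1.96))

small, large = tail_rate(5), tail_rate(100)
print(f"n=5:   {small:.3f}")   # well above the normal's nominal 0.05
print(f"n=100: {large:.3f}")   # close to 0.05
assert small > large
```

With n = 5, far more than 5% of t statistics exceed 1.96 in magnitude — exactly the "fat tails" at small sample sizes — while at n = 100 the rate settles near the normal's 5%.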

[00:25:04] Russ: This is great. But, 

[00:25:05] Dan: You know, being able to prompt or write code, that's a good thing. You want them to learn the skills as well.

[00:25:11] Russ: Exactly. 

[00:25:12] Dan: So I don't know. You know, when the calculator showed up, there was a big debate, right? What should students learn? Can they use the calculator? The apocryphal solution was you had to learn the regular math and the calculator; now you just had to learn twice as much. Yes. And so maybe that's what it's gonna be.

[00:25:28] Russ: And so, that's a very likely transitional strategy, and then we'll see where we end up. Okay. In the final few minutes, this seems like it's unrelated to AI, but I bet it's not. You've done a lot of work on physical activity and learning. You've even been on a paper recently where you talk about having a walk during a teaching session, and whether you get better outcomes than if you were just standing or sitting. So tell me about that interest, and tell me if it has anything to do with today's topic.

[00:25:58] Dan: I can make the bridge. I can do it, Russ. Right. So we did some studies. Um, I've done a lot of it. It's called embodiment. I got clued into this when I was asking people about gears. I'd say, you know, you have three gears in a line, and if you turn the gear on the left clockwise, what does the gear on the far right do? And I'd watch 'em, and they'd go like this with their hands. They'd model it with their hands. And then I was sort of like, well, what's the basis of this?

And I'd say, well, why? And they'd say, because this one's turning that way, that one... and I'd go, but why? And in the end, they just bottom out. They just show me their hands. They didn't say things like one molecule displaces another. Right. Right. So that sort of clued me in to embodiment.

[00:26:44] Russ: This pinky is going up, and this other pinky is going down.

[00:26:47] Dan: Yes. What don't you understand about that? Pretty much. Well, it was non-verbal. Yeah. So we went on, you know, and we discovered that the basis for negative numbers is actually perceptual symmetry. Huh. And we did some neuro stuff. And so the question is, how does this perceptual apparatus... some people say we're just loaded with perception, right?

The brain's just one giant perceiving machine. So how do you get that going? Part of the embodiment is my ability to take action, right? And so this is where we started. Right now, the AI feels very verbal, very abstract. Even the video generation, it's amazing, but it's pretty passive for me. So enter virtual worlds.

They're still working on the form factor where I can move my hand in space. 

[00:27:37] Russ: Yeah.

[00:27:37] Dan: And something will happen in the environment in response to that. You know, I think medicine has really been working on haptics so surgeons can practice. Uh, there was a great guy who made a virtual world for different congenital heart defects, and you could go in and practice surgery and see what would happen to the blood flow.

So I think that embodiment, where you get to bring all your senses to bear, it's not just words but everything, can really do a lot for learning and engagement. Not just physical skills.

[00:28:10] Russ: So that's a challenge. I'm hearing a challenge to AI, which is: as an educator, you know that this physicality can be a critical part of learning.

And by the way, would this be a surprise? I mean, we've been on Earth evolving for several hundred million years, and you would be surprised if our ability to manipulate and look at three-dimensional situations wasn't critical to learning. And yet that's not what AI is doing right now.

So this is a clear challenge to AI, among other things.

[00:28:38] Dan: Right. So I have a colleague, uh, Renata Ter, and she teaches architecture, and she has students make a blueprint for a building. Right. And then she feeds the blueprint to a CAD system that creates the building. She then takes the building and puts it into a physics engine.

It can basically render the building and make walls so you can't move through 'em, and it has gravity and things like that. She then puts the original student who designed the building in a wheelchair and has them try to navigate through that environment, at which point they sort of understand: oh, this is why you need so much space, so they can turn around, so they can navigate near the door.

I am sure that is an incredibly compelling experience that allows them to be generative about all their future designs. So yeah, this is a challenge, and part of it is the co-mingling of the AI and the virtual worlds. I think this is a big challenge. It's computationally very heavy, but it will open the door for lots of ways of teaching that you just couldn't do before.

[00:29:38] Russ: Thanks to Dan Schwartz. That was the future of educational technology. You've been listening to The Future of Everything, and I'm Russ Altman. You know what? We have an archive with more than 250 back episodes of The Future of Everything, so you have instant access to a wide array of discussions that can keep you entertained and informed. Also, remember to rate, review, and follow. I care deeply about that request. And if you wanna follow me, you can follow me on X at RBAltman, and you can follow Stanford Engineering at StanfordENG.