Stanford faculty weigh in on ChatGPT's shake-up in education
The recent release of ChatGPT — a new natural language processing tool that can write essays, spit out a haiku, and even produce computer code — has prompted more questions about what this means for the future of society than even it can answer, despite efforts to make it try.
Faculty from the Stanford Accelerator for Learning are already thinking about the ways in which ChatGPT and other generative artificial intelligence will change and contribute to education in particular.
Victor Lee, associate professor of education and the faculty lead for the accelerator initiative on generative AI in education, stresses the importance of educators in harnessing this technology. “If we want generative AI to meaningfully improve education,” he says, “there is the obvious step we need to take of listening to the existing expertise in education — from educators, parents, students, and scholars who have spent years studying education — and using what we learn to find the most pertinent and valuable use cases for generative AI in a very complicated educational system.”
Over the next several weeks, the Stanford Accelerator for Learning will launch listening sessions and gatherings with educators to strategize a path for generative AI. Says Lee, “We need the use of this technology to be ethical, equitable, and accountable.”
Here are some initial thoughts from accelerator faculty on the possibilities and risks of generative AI in education.
What’s next for high school essays and writing?
“Teachers are talking about ChatGPT as either a dangerous medicine with amazing side effects or an amazing medicine with dangerous side effects. When it comes to teaching writing, I'm in the latter camp.
“First, ChatGPT may help students use writing as a tool for thinking in ways that students currently do not. Many students are not yet fluent enough writers to use the process of writing as a way to discover and clarify their ideas. ChatGPT may address that problem by allowing students to read, reflect, and revise many times without the anguish or frustration that such processes often evoke.
“Second, teachers can use the tool as a way of generating many examples and nonexamples of a form or genre. Often, teachers have the resources and bandwidth to find or create only one or two models of a particular kind of writing — say, a personal narrative about a family relationship. As a result, students may come to believe that there is only one way to write such a narrative. ChatGPT allows teachers to offer students many examples of a narrative about family where the basic content remains the same but the style, syntax, or grammar differs. With many examples to compare and analyze, students can begin to see the relationship between form and content. They can develop criteria for what makes a strong piece of writing, or how one verb might affect readers differently than another. For teachers, designing instruction has just become much easier — ChatGPT is essentially a tool for creating contrasting cases, and most teachers will be delighted that ChatGPT is doing a lot of the legwork for them.
“Obviously, teachers are less delighted about the computer doing a lot of legwork for students. And students still need to learn to write. But in what way, and what kinds of writing? A third side effect of this new medicine is that it requires all of us to ask those questions and probably make some substantive changes to the overarching goals and methods of our instruction.”
— Sarah Levine, assistant professor of education
What will it mean for college admissions?
“There is some consternation in the admissions space about these technologies, and with obvious good reason. In one recent Twitter thread, someone posted an AI-generated essay along with the results of an informal study showing that more than half of admissions officers believed it was not computer-generated. With SAT/ACT test score usage waning in many admissions sectors, the narrative portions of college applications may receive additional emphasis in the evaluation of merit and deservingness. This was our worry when we found the content of admission essays to be more strongly correlated with income than SAT scores are.
“AI complicates this space immensely, though in what direction policy-wise, it’s hard to say. My best guess is that access to the technology will make its use in admission essays more prevalent among lower-socioeconomic-status households. Why? Because wealthier folks, as they’ve shown in the past, are quite savvy and will know that (1) places like ETS [Educational Testing Service, which develops standardized tests for K-12 and higher education] are already working on algorithms to accurately detect AI-written essays; and (2) anything available to the masses is something to not only avoid but to counter with a more exclusive strategy. That might look like writing non-standard essays — poetry or a mini-screenplay, for example — or something else. The drive for maintaining social distinction and its attendant privilege is quite strong. And there certainly will be a for-profit cottage industry rising up to meet the demand to help richer families in their quest. Things are moving fast, though, and perhaps at such a speed that the technology’s potential democratizing effects do surface in this space.”
— Anthony Lising Antonio, associate professor of education
We need to remember that language, even from ChatGPT, is deeply linked to culture and cognition
“The innovation centers the capacity to replicate and, in some cases, enhance how human intelligence emerges in dialogue. On its merits, this advancement has the potential to improve how software supports students’ learning through rich, computer-generated dialogue. But this incredibly important technological advancement must be grounded in an understanding of the cognitive and cultural benefits of dialogue as an educational tool. To replicate dialogue without that understanding runs the risk of centering a singular cultural lens: that of the designer.
“Dialogue serves many purposes. Social science research indicates that dialogue represents cultural membership, gender identification, and group membership more broadly. Said differently, how something is said sends multiple messages. On one level, every dialogic communication sends a message of content: the message shares an idea. On another level, it sends a message of belonging and identity: how the message is communicated cues who the message is for and who the speaker is. This subtle intersection of language cues and language identities embeds a message in every dialogical exchange. So artificial intelligence must embed the power of cultural cues in its communicative pathways. They are already there. How something is said sends a message of who the speaker expects to be.
“From my cognitive perspective, dialogue serves as both an assessment tool and a tool for developing mastery. It is vital that AI developers create opportunities for students to explain their way toward expertise and to use artificial intelligence for feedback and corrective support, while explicitly ensuring that all students receive cues of cultural belonging. If we think this way, all kids may benefit from AI technologies — provided developers do the important work of centering the intersection of language, culture, and cognition.”
— Bryan A. Brown, professor of education
What about opportunities for kids with disabilities?
“In the disability space, I’ve been having conversations about (a) how we could use AI to code videos of teachers and other instructors in order to coach them on instructional practices that have been demonstrated to be useful for kids (e.g., providing opportunities to respond; corrective feedback); and (b) ways AI could possibly help us develop smarter tutoring that is responsive to students’ needs. There seem to be a lot of opportunities.”
— Chris Lemons, associate professor of education
What will students need to know now?
“We have a glimpse of new things that are going to be built with generative AI. What do we need students to know and understand about how these are built, how they work, and the costs and benefits (financial, ethical, environmental, social) of different technologies for different visions of what education is supposed to do? As a first step, we need to seriously examine how generative AI is changing how different fields and disciplines do their work and what ideas students need to develop to both build and use AI for humans rather than in place of humans.”
— Victor Lee
Faculty mentioned in this article: Victor R. Lee, Bryan Brown, Sarah Levine, anthony lising antonio, Christopher J. Lemons