ChatGPT, a newly popular artificial intelligence program, can help teachers plan lessons and grade assignments. But it can also help students cheat and poses many other challenges, leading districts across the country – including the New York City Department of Education – to ban the software altogether.
“These are tough decisions for schools to make, but they shouldn’t be made out of fear,” said TC’s Lalitha Vasudevan, Vice Dean for Digital Innovation and Director of the Digital Futures Institute, in an interview with The Washington Post. “They should be made as part of improving student learning.”
We asked Vasudevan and other education technology experts from across the TC community to weigh in.

Paulo Blikstein, Associate Professor of Communication, Media and Learning Technologies Design; Lalitha Vasudevan, Vice Dean for Digital Innovation and Director of the Digital Futures Institute; Ezekiel Dixon-Román, Professor of Critical Studies in Race, Media, and Education and Director of the Edmund W. Gordon Institute for Urban and Minority Education; Renzhe Yu, Assistant Professor of Learning Analytics/Educational Data Mining; Ellen B. Meier, Professor of Practice; and Lecturer Jin Kuwata. (Photos: TC Archives)
Is ChatGPT untenable for schools?
For some, maybe yes – but ChatGPT’s complex advances may simply raise new questions, and demand new answers, about the future of education.
“The danger is that [educators] start relying on these tools before they have deep knowledge of teaching, classroom management and lesson planning,” Paulo Blikstein, TC’s Associate Professor of Communication, Media and Learning Technologies Design, told Education Week. “The danger is that technology drives education and not the other way around.”
But education has often been wary of technological advances, Vasudevan notes, pointing to the graphing calculator – once resisted, now a staple of high school math lessons around the world. Educators may not need to work against ChatGPT, but rather learn to minimize the risks it introduces and seize the opportunity to critique its use in education.
“Exploring the question of what it means to be a good collaborator with machines reopens questions about what it means to teach and learn, the relationships we have, and where and how we focus our efforts,” says TC’s Jin Kuwata, a lecturer who also teaches in CMLTD. “I see these kinds of tools not just as things that accomplish tasks, but as entities that we learn to work in partnership with to enrich our understanding of the world.”
But what about cheating issues?
OpenAI, the program’s parent company, is already working on companion software that can help teachers detect ChatGPT-generated text.
This, along with building trust between students and teachers, could be key. For Blikstein, the tacit social contract between an educator and a student is based on a mutual “human” investment in working together. AI tools complicate this.
“If we start using AI on both sides – student and teacher – these kinds of ethical contracts need to be reconsidered and made transparent to both parties,” Blikstein explained. “You get this strange learning environment [where] people no longer trust each other.”
More tools, and more trust, may come in time. But for now, can school districts rely on bans to limit academic dishonesty?
Many districts have simply blocked the platform on school-provided devices and networks, but – as with any banned technology – demand and use likely persist, especially if students can still access the platform from other networks and devices.
“Banning is a realistic short-term policy response when we are uncertain of a tool’s value and risks, but it is not a long-term solution,” says TC’s Renzhe Yu, Assistant Professor of Learning Analytics/Educational Data Mining, who encourages students to try out ChatGPT themselves to explore potential uses.
“At this point, it is researchers’ responsibility to work with teachers to understand how best to integrate these AI tools and leverage their power – to better help students master knowledge, AI literacy and other skills.”

(Photo: iStock)
What other implications does AI pose for education?
Several, to make a long story short. Here are some of the main concerns:
- AI is automating more professions and changing career preparation in schools – “What happens to the future of coding and computer science education when generative AI like ChatGPT can write and run code?” asks TC’s Ezekiel Dixon-Román, Professor of Critical Studies in Race, Media, and Education and Director of the Gordon Institute for Urban and Minority Education. “The social and educational implications are significant. These forms of technological affordances will not only impact the teaching and learning processes, but also the school reproduction of the social through who is trained to use versus who is trained to design the systems, ultimately framing the discourse of knowledge.”
- AI upholding and reinforcing racist structures – Racial biases and colonial structures are embedded in algorithms across the internet, including in AI programs, explains Dixon-Román. “We all need to take a step back and think about what these mediums can do,” he says, noting that AI programs like ChatGPT collect and process information from the internet with weak and limited safeguards. “Not only is ChatGPT limited to pre-2021 internet data, but it inherits much of the existing colonial logics of racial hierarchies that insidiously and explicitly permeate the internet and social media. It’s not so much in terms of identity and representation, but more so in its inability to engage in reflection and critique of given data, leaving it open to racialized framings, characterizations and interpretations.”
- AI driving misinformation – “What [generative AI tools] produce as an answer in no way guarantees whether the information is true,” says Yu. “And because [these programs] learn a lot about how people express themselves, most of their output follows how human beings communicate – in a very confident tone, even if it’s false.” This phenomenon, Yu says, further underscores long-standing calls for students to develop media literacy to help them navigate misinformation online.
What happens next?
“We can’t put the genie back in the bottle. [This software] is an opportunity to engage more of our teaching, more of our educators, in understanding the power and potential of technology,” says TC’s Ellen B. Meier, Professor of Practice. “If we don’t think of new educational opportunities, we end up doing more transactional teaching, in which information is simply presented to students and they don’t necessarily learn.”
So changing the way students are taught – even through programs as novel as generative AI – is just another chapter in the long-running effort to improve education.
“If the things we used to put so much effort into teaching can be automated, then maybe we should rethink the actual goals and experiences we should be working towards in the classroom,” says Vasudevan.
So what does this process look like? “We have a real chance to say, ‘Let’s center the voices and experiences of students and teachers around their concerns and interests,’ and that’s how TC can play a role,” says Vasudevan, who focuses on this work with her colleagues at TC’s Digital Futures Institute — which will soon release a guide for educators and create “open spaces for discussion” to demystify and engage with AI tools.
“We are still in the early days of this consumer-facing AI, but our daily lives have been saturated with this type of AI for some time… Now we have the opportunity to shift that a bit – to help teachers be inquirers, approaching these tools the way they would interrogate a text on a topic,” says Vasudevan. “The key question now is how we can improve the contexts into which people are brought – so that students and teachers are partners in making decisions about how to use these tools, rather than just users.”