“So when people get lazy and [say], ‘Hey, write this stuff down for me,’ then take it and use it, there might be some mistakes,” Schneider said. This makes it a valuable tool for generating ideas and writing drafts, but a risky option when using it for final work. Students who decide to use ChatGPT will likely need to verify that the information it provides is correct, either by knowing the information first or by confirming with other trusted sources.
ChatGPT can support teachers, not replace them
For some educators, ChatGPT also raises the alarm that widespread adoption of AI could lead to job losses, particularly in areas such as tutoring and language teaching. Schneider said that was unlikely. “I can’t imagine a school system that doesn’t have teachers,” he said. Many studies show a correlation between strong student-teacher relationships and increased student engagement, attendance, and academic achievement.
As people explore how AI will support teaching and learning, teachers’ roles may change as these technological tools become more widely used. “Teachers are going to have to evolve and figure out how to harness the power of this tool to improve teaching,” Schneider said. For example, the AI Institute for Transforming Education for Children with Speech and Language Processing Challenges, which received $20 million in funding from IES and the National Science Foundation, is studying how ChatGPT can help speech-language pathologists. According to a recent American Speech-Language-Hearing Association survey, the median number of students served by a speech-language pathologist is 48. “There just aren’t enough pathologists in schools,” Schneider said. ChatGPT has the potential to help speech therapists with paperwork, which takes nearly six hours a week, and to develop personalized treatment plans for students with cognitive impairments, such as dyslexia.
“We need to rethink what we can do to free up teachers to do the job they’re really good at and how to help them individualize their interventions and provide instruction and support,” Schneider said.
When you use ChatGPT, your data is not secure
ChatGPT is compelling because it references massive amounts of data and identifies patterns to generate text that looks like it was written by a human. It can even mimic the writing style and tone of the person using it. “The more data they have, the better the model,” Schneider said, referring to ChatGPT’s ability to generate responses. “And there’s tons of data floating around.”
The information users put into ChatGPT to generate an answer – also called input – can be in the form of a question, a statement, or even partial text that the user wants ChatGPT to complete. But when students use ChatGPT, they can put their data at risk.
According to the OpenAI privacy policy, entries – including those containing personal information, such as names, addresses, phone numbers, or other sensitive content – may be reviewed and shared with third parties. Additionally, there is an ever-present risk that if ChatGPT is hacked, a bad actor could gain access to user data.
Schneider acknowledged that while ChatGPT will be used to support teaching and learning, privacy is a major concern. “We are developing much better methods to preserve privacy than in the past,” he said. “We have to remember that this is a bit of a cost analysis. Using all this data has many advantages. It also carries certain risks. We have to balance them.” He added that ChatGPT is similar to wearing an Apple Watch or talking to an Amazon Alexa, as these tools also rely on user data.
Banning ChatGPT is not a long-term solution
Since students can enter original prompts in ChatGPT and get unique responses, this begs the question: Does using ChatGPT constitute plagiarism? And to what extent must AI-generated text be modified before it can be considered a student’s own work? Instead of answering these questions, some schools, including districts in Los Angeles, New York, and Seattle, have chosen to simply ban the use of ChatGPT.