Companies like Amazon and Facebook have systems that constantly react to how users interact with their apps in order to improve the user experience. What if educators could use the same “adaptive experimentation” strategy to regularly improve their teaching materials?
This is the question posed by a group of researchers who have developed a free tool they call the Adaptive Experiment Accelerator. The system, which leverages AI, recently won first place in the annual XPRIZE Digital Learning Challenge, which has a $1 million purse split among the winners.
“At Amazon and Facebook, they quickly make adjustments and change what their users see to try to quickly understand which small changes are most effective, and then deliver more of those changes to the audience,” says Norman Bier, director of the Open Learning Initiative at Carnegie Mellon University, who worked on the project. “When you think about it in an educational context, it…really opens up the possibility of giving more students the kinds of things that better support their learning.”
Bier and others involved in the project say they are testing the approach in a variety of educational settings, including public and private K-12 schools, community colleges and four-year colleges.
EdSurge sat down with Bier and another researcher on the project, Steven Moore, a doctoral student at the Carnegie Mellon Institute for Human-Computer Interaction, to learn more about their winning XPRIZE entry and about what they see as the challenges and opportunities of AI in the classroom.
The discussion took place at the recent ISTE Live conference in Philadelphia in front of a live audience. (EdSurge is an independent newsroom that shares a parent organization with ISTE. Learn more about EdSurge’s ethics and policies here and supporters here.)
EdSurge: The app you developed helps teachers test their learning materials to see if they are effective. What’s new in your approach?
Norman Bier: If you think of standard A/B testing [for testing webpages], it usually works on averages. When we average everything, we are going to have populations of students for whom the intervention that is best for everyone on average is not good for them individually. One of the real benefits of adaptive experimentation is that we can begin to identify who these subgroups of students are and which specific types of interventions are best for them. Then we can offer those interventions and, in real time, continue to give each student the intervention that suits them best. So there is a real opportunity, we believe, to better serve students and to approach the notion of experimentation more fairly.
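The contrast Bier draws between averaged A/B testing and adaptive experimentation can be sketched as a per-subgroup Thompson-sampling bandit. Everything below (the class name, the subgroup labels, the success model) is an illustrative assumption for the sketch, not the team’s actual algorithm:

```python
import random


class AdaptiveExperiment:
    """Toy per-subgroup Thompson sampling (Beta-Bernoulli bandit).

    Instead of picking the intervention with the best overall average,
    each subgroup keeps its own belief about every intervention, so an
    intervention that wins on average is not forced on subgroups where
    it underperforms.
    """

    def __init__(self, interventions, seed=0):
        self.interventions = list(interventions)
        # (subgroup, intervention) -> [successes, failures]
        self.stats = {}
        self.rng = random.Random(seed)

    def choose(self, subgroup):
        # Sample a plausible success rate for each intervention from its
        # Beta(successes + 1, failures + 1) posterior; pick the highest.
        best_arm, best_draw = None, -1.0
        for arm in self.interventions:
            s, f = self.stats.get((subgroup, arm), (0, 0))
            draw = self.rng.betavariate(s + 1, f + 1)
            if draw > best_draw:
                best_arm, best_draw = arm, draw
        return best_arm

    def record(self, subgroup, arm, succeeded):
        # Update the observed outcome for this subgroup/intervention pair.
        counts = self.stats.setdefault((subgroup, arm), [0, 0])
        counts[0 if succeeded else 1] += 1
```

In a simulation where one subgroup responds best to hints and another to memes, the sampler converges on the right intervention for each group within a few hundred observations, while a single averaged winner would shortchange one of the two groups.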
I understand that one aspect of this is something called “learnersourcing.” What is that?
Steven Moore: The concept of learnersourcing is similar to crowdsourcing, where a large number of people are involved. Think of the game show “Who Wants to Be a Millionaire?” when contestants poll the audience. They ask the audience, “Hey, there are four options here. I don’t know which one, what should I choose?” And the audience says, “Oh, go for choice A.” That’s an example of crowdsourcing and the wisdom of the crowd. All these minds are coming together to try to reach a solution.
So learnersourcing is a take on that, where we actually collect all of this data from the students in courses, in these massive open online courses, and get them to do something for us that we can then feed back into the course.
One example in particular is asking students taking, say, an online chemistry course to create a multiple-choice question for us. And so, if you have a class with 5,000 students and everyone creates a multiple-choice question, you now have 5,000 new multiple-choice questions for that chemistry class.
But you may be wondering how good these are. And honestly, it can vary a lot. But with this wave of ChatGPT and all these large language models and natural language processing, we are now able to take these 5,000 questions, improve them and find the best ones that we can actually use in our course, instead of just blindly putting them all into the course.
Bier: We ask students to write these questions not because we are looking for free labor, but because we believe it will actually help them as they develop their own knowledge. Also, the kinds of questions and feedback they give us help us improve the course material. We know from a lot of research that a novice perspective is actually very important, especially in these lower-level courses. And so quite implicit in this approach is the idea that we take advantage of the novice perspective that students bring, and that we all lose as we gain expertise.
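Before any LLM scoring of the kind Moore describes, a pipeline like this would plausibly start with a cheap rule-based screen to discard obviously unusable learner-written questions. The field names and thresholds below are invented for this sketch and are not taken from the team’s system:

```python
def screen_question(question):
    """Flag structural problems in a learner-sourced multiple-choice
    question before sending it on for costlier (e.g. LLM-based) quality
    scoring. Returns a list of issues; an empty list passes the screen."""
    issues = []
    stem = question.get("stem", "").strip()
    options = [opt.strip() for opt in question.get("options", [])]
    answer = question.get("answer", "").strip()

    if len(stem) < 15:
        issues.append("stem too short to be a real question")
    if len(options) < 3 or len(set(options)) != len(options):
        issues.append("needs at least 3 distinct answer options")
    if answer not in options:
        issues.append("marked answer is not one of the options")
    return issues
```

A screen like this lets the expensive quality model run only on the submissions that are at least structurally complete, which matters when 5,000 students each contribute a question.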
How does AI play a role in your approach?
Moore: In our XPRIZE work, we definitely had a few algorithms powering the backend that take all the student data and basically run an analysis to say, “Hey, should we give this intervention to student X?” So AI was definitely a big part of that.
What is a scenario of how a teacher in a classroom would use your tool?
Bier: The Open Learning Initiative offers a statistics course. It’s an adaptive course; think of it as a high-tech interactive textbook. And so we have thousands of students at a university in Georgia using this statistics course instead of a textbook. Students read and watch videos, but most importantly, they interact: they answer questions and get targeted feedback. And so in that environment, we’re able to introduce these learnersourcing questions as well as some approaches to try to motivate students to write their own questions.
Moore: I have a good example from one of our pilot tests for the project. We wanted to see how we could engage students in optional activities. We have all these great activities in this OLI system, and we want students to do extra stats problems and so on, but no one really wants to do them. And so we wanted to say, “Hey, if we can provide a motivational message, something like, ‘Hey, keep going, five more problems and you’ll learn more, you’ll do better on those exams and tests.’” How can we adapt these motivational messages to get students to participate in these optional activities, whether that’s learnersourcing or simply answering multiple-choice questions?
And for this XPRIZE contest, in our pilot test we had some motivational phrases. But one of them involved a meme, because we thought maybe some undergrads in that particular course would like that. So we put up a picture of a capybara, which is kind of like a fat hamster or a guinea pig, sitting in front of a computer with headphones and glasses on, with no text. We figured, let’s see if it gets students to do it. And out of about five different conditions, the photo of the capybara with headphones in front of a computer led to the most students participating in the activities that followed. Maybe it made them laugh, who knows the exact reason. But compared to all those motivational messages, it had the best effect in this particular class.
There is a lot of excitement and concern about ChatGPT and the latest generative AI tools in education. Where are you two on this continuum?
Moore: I’m definitely playing both sides, where I see a lot of cool progress, but you should definitely be super cautious. I would say you still need a human eye on the output of any generative AI you’re using. Never blindly trust what it gives you; always put a human eye on it.
I would also say that plagiarism checkers for ChatGPT are terrible right now. Please don’t use them. They are not accurate [because of false positives].
Bier: This notion of the human in the loop is really a feature of the work that we do at CMU, and we’ve thought strategically about how to keep that human in the loop. And that’s a bit at odds with some of the current hype. There are people rushing to say, “What we really need is to build a magical tutor that we can put directly in front of all of our students so they can ask it questions.” There are a lot of problems with that. We all know about the technology’s tendency to hallucinate, which is compounded by the fact that a lot of learning research tells us that we like things that confirm our misconceptions. Our students are the least likely to challenge a bot that tells them things they already believe.
So we’ve tried to think about the deeper applications of that, and about ways to use these tools while keeping a human in the loop. And there are many things we can do. There are aspects of content development for things like adaptive systems that human beings, no matter how good, hate to do. As a courseware designer, my faculty authors hate writing questions with good feedback. It’s just not something they want to spend their time doing. So we’re excited to provide ways for these tools to start giving them first drafts that they then revise.
Listen to the full conversation on this week’s EdSurge podcast.