Wily students are increasingly using generative artificial intelligence (AI) language programs to fast-track classwork. Some teachers now require students to handwrite their essays and claim that using generative language programs is plagiarism. But is it?
One such AI program, ChatGPT, released late last year, has become a hit for students and a nightmare for teachers.
Since its release, ChatGPT has added users faster than TikTok and was soon scooped up by Microsoft, which intends to sell the technology to schools so they can create their own chatbots. With ChatGPT now behind a paywall, students are finding other free products on the market, such as Caktus, while Google has announced its ChatGPT competitor, Bard. Meta, the owner of Facebook and Instagram, plans to bring generative AI to its products as well.
As generative AI proliferates, parents, teachers, and politicians are left struggling to figure out the parameters around tools we barely understand.
How it Works
Students type a query on any topic into a chat field, as if holding a conversation. The program, trained on vast amounts of widely available text, then uses AI to generate an essay in seconds.
Here’s what it looks like when my 15-year-old asked ChatGPT to “write me an essay on Rome”:
Ask a better question and you’ll get a richer answer.
A student with a baseline understanding of the classics and attention to current politics would be able to frame a question to generate an essay with more context. For example, the question: “How does the overreach and eventual collapse of Rome’s government resemble the decline of the U.S.?” requires a human to choose the lens of analysis.
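The gap between a minimal and a refined prompt is visible even in how such a request is packaged under the hood. Here is a rough sketch, assuming a chat-style API in the shape of OpenAI's public chat completions interface (the model name and endpoint are assumptions, and an API key would be needed to actually send anything):

```python
# Sketch: packaging a shallow vs. refined prompt for a chat-style
# generative AI service. Model name and endpoint are assumptions
# modeled on OpenAI's public chat completions API.

def build_request(prompt: str, model: str = "gpt-3.5-turbo") -> dict:
    """Package a student's question as a chat completion request."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

shallow = build_request("Write me an essay on Rome")
refined = build_request(
    "How does the overreach and eventual collapse of Rome's "
    "government resemble the decline of the U.S.?"
)

# Actually sending the request would require an API key, e.g.:
# requests.post("https://api.openai.com/v1/chat/completions",
#               headers={"Authorization": f"Bearer {API_KEY}"},
#               json=refined)
```

Either payload reaches the same model; the "lens of analysis" lives entirely in the `content` string the human supplies.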
While the instinct to limit access to generative AI is understandable – especially on the academic front – the questions we should really be asking are: should this new technology be regulated, and what exactly do we want to prevent?
AI in Education
A responsible approach is to think about how AI tools can enable people to do more, and how teachers can integrate them into existing education settings and products going forward.
Parents and teachers are naturally skeptical. Yet all sorts of AI programs are already used in education to facilitate classwork and homework and to proctor exams. And some see potential for this technology to help level the playing field for special needs students, such as those with communication disabilities or executive function issues.
New technologies like this are often the subject of fear-based messaging. Kara McWilliams, head of the ETS Product Innovation Labs, warns against neo-Luddism, saying:
“We really need to embrace advanced technologies in education. Remember when the calculator came into play and there was a big fear about using it? I’m of the mind that AI isn’t going to replace people, but people who use AI are going to replace people.”
Wharton Professor Ethan Mollick encourages students in his entrepreneurship courses to engage with AI tools like ChatGPT as an emerging skill. His guidance encourages students to use such tools in their work while understanding what they cannot do. “Be aware of the limits of ChatGPT,” he says:
“If you provide minimum effort prompts, you will get low quality results. You will need to refine your prompts in order to get good outcomes. This will take work.”
As a mom, I hope my high schooler will manage to adopt the time-saving tools generative AI offers with an attitude of personal responsibility. My son can use this extra time to deepen his understanding of history in order to “refine the prompt” as Mollick suggests.
Or empty the dishwasher.