By: Cassidy Delamarter, University Communications and Marketing
A USF assistant professor is using the results of his new study on artificial intelligence chatbots to rework how he assigns homework in his applied linguistics course.
"For instance, instead of asking students to produce short summaries on different readings, something that ChatGPT can do quite easily, I'm giving my students more integrated, hands-on assignments that ask them to blend traditional academic readings with personalized, experiential learning projects," said Matthew Kessler, assistant professor in the Department of World Languages.
In one such assignment, Kessler's students will be required to use a mobile app of their choice to learn a foreign language and to document how they use the app to immerse themselves in the language over five weeks.
These changes are inspired by Kessler's new research, recently published in a journal on ScienceDirect, which revealed that even experts from the world's top linguistics journals have difficulty differentiating AI-generated from human-generated writing.
Working alongside J. Elliott Casal, assistant professor of applied linguistics at The University of Memphis, Kessler tasked 72 experts in linguistics with reviewing a variety of research abstracts to determine if they were written by AI or humans.
"We thought if anybody is going to be able to identify human-produced writing, it should be people in linguistics who've spent their careers studying patterns in language and other aspects of human communication," Kessler said.
The findings revealed otherwise. Despite applying rationales to judge the writing, such as identifying certain linguistic and stylistic features, the experts were largely unsuccessful, with an overall positive identification rate of less than 39 percent.
"What was more interesting was when we asked them why they decided something was written by AI or a human," Kessler said. "They shared very logical reasons, but again and again, they were not accurate or consistent."
Based on this, Kessler and Casal concluded that ChatGPT can write short texts in these genres just as well as most humans, if not better in some cases, given that AI typically does not make grammatical errors.
The silver lining for human authors lies in longer forms of writing. "For longer texts, AI has been known to hallucinate and make up content, making it easier to identify that it was generated by AI," Kessler said.
USF sophomore Max Ungrey is taking Kessler鈥檚 course through the Judy Genshaft Honors College. He says he occasionally uses ChatGPT before raising his hand with a question in class.
"Obviously ChatGPT is detrimental if used as a replacement for learning, rather than a tool. I've absolutely noticed changes in school due to ChatGPT, both due to my own use and the use of others," Ungrey said. "I can certainly see myself using ChatGPT and other language models in day-to-day work. For example, some web browsers have built-in AI which can summarize large bodies of text, and I think I will end up using tools like that."
Echoing Ungrey's experience, Kessler hopes the study will spark a broader conversation about establishing the ethics and guidelines needed for the use of AI in research and education.