On January 18, 2023, Louis Oliphant, Ph.D. presented "ChatGPT and AI Assistants: How they work and why they're important"
The news is alight with talk of ChatGPT changing the face of student writing. Louis Oliphant, Ph.D. unpacks ChatGPT and other AI assistants. How do they work? Why is it important that we talk about them with students? Why is banning them from our classrooms shortsighted?
On February 1, 2023, Courtney Mauck, Ph.D. and Garrett Munro presented "ChatGPT in the Classroom"
How do you talk about ChatGPT and other AI assistants with students? What should you include in your syllabus? How can you modify assignments in the ChatGPT era? Courtney and Garrett shared their experiences thus far.
From the Grobe article:
"In hiding the seams of its own relentless pattern replication, ChatGPT uses many of the same tricks I teach my students to avoid as warning signs of insufficient argument. For instance, it exclusively uses transition words like “Another,” “Additionally,” and “Over all” [sic] to start its paragraphs, which may lend an air of structure to the essay but in fact provide no logical connection between adjacent ideas. (Why “another” example — and “another” example of what, exactly?) Then, it couches every claim in ambiguous hedge words like “most,” “often,” “many,” and “some,” which ask the reader to do the writer’s work by deciding for themselves how limited or broad each claim was actually meant to be.
These problems, and more like them, are caused by what is currently the most obvious shortcoming of ChatGPT: its inability to cite and use evidence in anything resembling the way we require in the interpretive humanities. When prodded to cite specific evidence, it supplies a slightly narrower generalization. Even when supplied with specific evidence relevant to its arguments, it cannot do the work of connecting the one to the other. That’s because it is not actually dealing with facts about the world, but with the proximity of various clusters of words in a hugely multidimensional language model. It can endlessly move through the layers of that model and around each layer’s clusters of keywords, but it cannot get below these words to the facts they represent."