Some professors have already started working with students on how to approach the artificial intelligence program. David Mick, the Robert Hill Carter Professor of Marketing at UVA’s McIntire School of Commerce, studied the application and added a paragraph to his syllabus cautioning students against claiming the AI’s work as their own.
“I prohibit the use of these AI tools to produce content that they can then put into an exercise or a term paper, which they have to produce in my class, and then hand it in as though it was their work,” he said during the workshop. “I stressed the words ‘content’ and ‘text.’ You cannot generate content and text, and then use it as if it were yours.
“However,” he said, “if you can use these tools to find sources of insights, which you can then use – tables, figures, opinions or whatever – that you would quote, you can do that and then put the reference the way you would do any other time you write a paper, so that we know the source of where that idea came from.”
Sarah Ware, co-director of Legal Research and Writing at the School of Law, thinks the application could be used in drafting legal documents.
“There is a lot of legal writing on the internet that AI could pull from,” Ware said. “If it becomes a reliable tool for drafting, it could change how lawyers work. And that could change how we teach – thinking about how to use it skillfully, how to use it ethically. But it’s not there yet. I’ve played with it a bit. The results looked good, but were shallow and sometimes wildly wrong.”
Pennock said there are beneficial uses for the AI applications, such as creating PowerPoint presentations and distilling complex technical text into more easily understood prose.
“How many of our students find the readings, the journal articles, we give them confusing?” said Pennock, who noted that an application named “Explainpaper” allows a user to “upload a paper, highlight the text that you find confusing, and it will give you a plain English version.”
Pennock highlighted a 537-word page of journal text, and the AI program reduced it to 170 words.
“Students can read much more quickly as they read this,” Pennock said of the shorter version. “And you can imagine if students work together, they could create very short versions that are maybe dumbed down, but very quick reads of the journal articles that we’re asking them to engage in. It's missing all the nuance that I find helpful and engaging and wonderful about this article. But to a student who’s entering the conversation without a lot of scaffolding, this might allow them to get there without the nuance.”
“[ChatGPT] can draft emails for you, it can draft the meeting minutes,” Palmer said. “However, the application doesn’t know anything about you. So anytime you ask students to reflect on their own experience, it’s not going to be able to do that. So far, ChatGPT knows nothing, or very little after 2021. That will change. At some point, it will be almost instantaneously updated.”
Mick questions the ethics of working with artificial intelligence applications since they learn through interaction.
Historian Jennifer Sessions agrees.
“I feel very torn because we have a responsibility to learn about what this thing is and how it works because some of our students will be experimenting with it,” Sessions said during the workshop. “And I am resentful that there’s no way to do that without contributing to it.”
Sessions said she was also concerned about data privacy and the opacity of the entities and business models behind these tools.