Academic Misuse of LLM AI

From November 2022 to my graduation in April 2023, I worked with Professor Robert Loy in Grand Canyon University's Research and Development Program to explore how generative writing AI could affect and disrupt academia.

Professor Loy and I used different generative writing AI models to create discussion posts, papers, and other text-based assignments, using tools such as OpenAI's ChatGPT and Rytr. We then tested the generated answers against both AI detectors and human readers to see whether either could tell they were machine-written.

[Figure: slide from our PowerPoint deck showing participants correctly identifying an AI-written discussion post, from the first round of human testing, before we prompted the AI to write in a more human-like style.]

Although the first round of testing suggested that humans could partially detect AI-written discussion posts, we soon found that we could trick readers into labeling human writing as AI-generated, and vice versa.

By instructing the AI to write like an 8th grader, and then deliberately selecting genuine student answers written in a highly sophisticated style, we found that the more complicated and academic a piece of writing is, the more inclined humans are to believe an AI wrote it.
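To give a concrete sense of that prompting step, here is a minimal sketch using the OpenAI Python SDK. It is an illustration under stated assumptions, not our exact method: our experiments used the ChatGPT interface directly rather than the API, and the model name, question, and prompt wording below are all placeholders.

```python
# Minimal sketch, assuming the OpenAI Python SDK (pip install openai)
# and an OPENAI_API_KEY set in the environment. The model name and
# prompt wording are illustrative, not the exact instructions we used.
from openai import OpenAI

client = OpenAI()

# Hypothetical discussion question standing in for a real assignment.
DISCUSSION_QUESTION = (
    "How did the printing press change the spread of ideas in Europe?"
)

# System prompt nudging the model toward simpler, more "human" prose:
# shorter sentences, plainer vocabulary, no formal transitions.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder for whichever model is tested
    messages=[
        {
            "role": "system",
            "content": (
                "Write like an 8th grader: short sentences, everyday "
                "words, and no formal transitions like 'therefore' "
                "or 'consequently'."
            ),
        },
        {"role": "user", "content": DISCUSSION_QUESTION},
    ],
)

print(response.choices[0].message.content)
```

The key design choice is the system prompt: rather than asking the model to "sound human," it targets the specific surface features (sentence length, vocabulary, formal connectives) that our testing suggested readers use as signals of AI authorship.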

Even AI writing detectors fell for the same strategy. I could write out an answer to a question without the use of AI and be correctly classified as human. If I then rewrote the answer in more academically formal language, sprinkling in words like "therefore" and "consequently," the detectors would misclassify it as AI-generated.
