Ömer Can Kuşcu
Class of 2023
Aydın, Aydın
- "Examining the Human-Like Proficiency of GPT-2 in Recognizing Self-Generated Texts" with mentor Efthimios (Oct. 15, 2023)
Examining the Human-Like Proficiency of GPT-2 in Recognizing Self-Generated Texts
Started Aug. 3, 2023
Abstract or project description
In recent years, the widespread use of generative language models has brought opportunities as well as philosophical and technical questions. GPT-2 is an open-source language model with 1.5B parameters released by OpenAI. Our aim in this paper is to use the classification capabilities of GPT-2 to offer a new perspective on the question of whether language models exhibit some form of consciousness or self-awareness, alongside technical questions such as how to detect misuse of language-model outputs.
To investigate this phenomenon, a GPT2ForSequenceClassification model was fine-tuned on the TuringBench datasets and its performance was examined. In addition, we evaluated the accuracy the model achieved when trained on training sets of different sizes, as well as its performance in human-machine discrimination.
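A minimal sketch of the setup described above, using the Hugging Face `transformers` library's `GPT2ForSequenceClassification` class. To keep the example self-contained and runnable without downloading the pretrained 1.5B-parameter weights, it instantiates a tiny randomly initialized GPT-2 config; the actual study fine-tunes the pretrained model on TuringBench data, and the label layout here (human vs. machine) is an illustrative assumption.

```python
import torch
from transformers import GPT2Config, GPT2ForSequenceClassification

# Tiny random-init config as a stand-in for the pretrained 1.5B model.
# num_labels=2 sets up binary classification (e.g. human vs. GPT-2);
# GPT-2 has no pad token by default, so one must be assigned.
config = GPT2Config(n_layer=2, n_head=2, n_embd=64,
                    num_labels=2, pad_token_id=0)
model = GPT2ForSequenceClassification(config)

# Stand-in token ids; in the study these would come from a GPT-2 tokenizer
# applied to TuringBench texts.
input_ids = torch.tensor([[5, 17, 42, 7]])
with torch.no_grad():
    logits = model(input_ids).logits  # one score per class

print(logits.shape)  # (batch_size, num_labels) -> torch.Size([1, 2])
```

Fine-tuning would then proceed as usual for sequence classification: pass `labels` alongside `input_ids` to get a cross-entropy loss, and optimize over the TuringBench training split.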
The model identifies GPT-2-generated content consistently and more accurately than it distinguishes other machine-generated text from human writing. This ability to recognize self-generated text closely parallels people's ability to recognize their own writing, and these results offer an interesting perspective on the self-awareness of artificial intelligence. Additionally, the model distinguished machine-generated output from human output with high accuracy even when trained on very few examples.