Show simple item record

dc.contributor.author: Idialu, Oseremen Joy
dc.contributor.author: Mathews, Noble Saji
dc.contributor.author: Maipradit, Rungroj
dc.contributor.author: Atlee, Joanne M.
dc.contributor.author: Nagappan, Meiyappan
dc.date.accessioned: 2024-03-07 15:16:50 (GMT)
dc.date.available: 2024-03-07 15:16:50 (GMT)
dc.date.issued: 2024-04-15
dc.identifier.uri: https://2024.msrconf.org/
dc.identifier.uri: http://hdl.handle.net/10012/20384
dc.description.abstract: Artificial intelligence (AI) assistants such as GitHub Copilot and ChatGPT, built on large language models like GPT-4, are revolutionizing how programming tasks are performed, raising questions about whether code is authored by generative AI models. Such questions are of particular interest to educators, who worry that these tools enable a new form of academic dishonesty, in which students submit AI-generated code as their own work. Our research explores the viability of using code stylometry and machine learning to distinguish between GPT-4-generated and human-authored code. Our dataset comprises human-authored solutions from CodeChef and AI-authored solutions generated by GPT-4. Our classifier outperforms baselines, with an F1-score and AUC-ROC score of 0.91. A variant of our classifier that excludes gameable features (e.g., empty lines, whitespace) still performs well, with an F1-score and AUC-ROC score of 0.89. We also evaluated our classifier with respect to the difficulty of the programming problem and found almost no difference between easier and intermediate problems; the classifier performed only slightly worse on harder problems. Our study shows that code stylometry is a promising approach for distinguishing between GPT-4-generated code and human-authored code.
dc.language.iso: en
dc.publisher: Mining Software Repositories
dc.relation.ispartofseries: 21st International Conference on Mining Software Repositories
dc.relation.uri: https://zenodo.org/records/10153319
dc.subject: code stylometry
dc.subject: ChatGPT
dc.subject: AI code
dc.subject: GPT-4 generated code
dc.subject: authorship profiling
dc.subject: software engineering
dc.title: Whodunit: Classifying Code as Human Authored or GPT-4 Generated - A Case Study on CodeChef Problems
dc.type: Conference Paper
dcterms.bibliographicCitation: Idialu, O. J., Mathews, N. S., Maipradit, R., Atlee, J. M., & Nagappan, M. (2024). Whodunit: Classifying Code as Human Authored or GPT-4 Generated - A Case Study on CodeChef Problems. 21st International Conference on Mining Software Repositories, April 15-16, 2024, Lisbon, Portugal.
uws.contributor.affiliation1: Faculty of Mathematics
uws.contributor.affiliation2: David R. Cheriton School of Computer Science
uws.typeOfResource: Text
uws.peerReviewStatus: Reviewed
uws.scholarLevel: Faculty
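The abstract describes classifying code by stylometric features such as empty lines and whitespace. The following is a minimal illustrative sketch of that general idea, not the authors' actual pipeline: it extracts a few layout features and labels a snippet with a toy nearest-centroid rule. All function names and sample snippets here are hypothetical; the paper's feature set and classifier may differ.

```python
# Illustrative sketch only (hypothetical, not the paper's implementation):
# derive simple layout ("stylometric") features from source code, then
# label a snippet by whichever class centroid its features are nearer to.
import math

def stylometric_features(code):
    """Layout features; empty lines and whitespace counts are among the
    'gameable' features the paper's variant classifier excludes."""
    lines = code.splitlines() or [""]
    return [
        float(len(lines)),                                 # total lines
        sum(1 for ln in lines if not ln.strip()),          # empty lines
        sum(ln.count(" ") for ln in lines) / len(lines),   # spaces per line
        sum(len(ln) for ln in lines) / len(lines),         # mean line length
    ]

def centroid(vectors):
    """Component-wise mean of a list of feature vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def classify(code, human_centroid, ai_centroid):
    """Assign the label of the nearer class centroid."""
    f = stylometric_features(code)
    if math.dist(f, human_centroid) <= math.dist(f, ai_centroid):
        return "human"
    return "gpt4"
```

In practice one would compute the two centroids (or train a real classifier) from labeled corpora, e.g. scraped CodeChef submissions versus GPT-4 outputs, and evaluate with F1 and AUC-ROC as the paper reports.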





All items in UWSpace are protected by copyright, with all rights reserved.
