Academic integrity in the context of using ChatGPT and other generative AI systems in student research

Authors

  • Nataliia Komlyk, Candidate of Pedagogical Sciences, Associate Professor, Associate Professor of the Department of English Language, National University of Kyiv-Mohyla Academy, Kyiv, Ukraine https://orcid.org/0000-0001-6506-4927
  • Maryna Mykhaskova, Doctor of Pedagogical Sciences, Professor, Professor of the Department of Musicology, Instrumental Training and Methods of Music Education, Khmelnytskyi Humanitarian and Pedagogical Academy, Khmelnytskyi, Ukraine https://orcid.org/0000-0003-1248-3903
  • Iryna Krasylnykova, Candidate of Pedagogical Sciences, Associate Professor, Associate Professor of the Department of Fine, Decorative Arts, Technologies and Life Safety, Vinnytsia State Pedagogical University named after Mykhailo Kotsyubynskyi, Vinnytsia, Ukraine https://orcid.org/0000-0002-3057-4000

DOI:

https://doi.org/10.5281/zenodo.18870992

Keywords:

generative artificial intelligence, student research, research ethics, digital technologies in education, authorship, academic assessment.

Abstract

The rapid integration of generative artificial intelligence systems into educational and scientific environments is transforming how student research is carried out and, at the same time, raises the problem of upholding the principles of ethical responsibility, authorship, and the reliability of results. The use of automated text-generation tools changes the nature of educational and research activities, creating new challenges for ensuring the transparency and objectivity of academic assessment. The purpose of this article is to examine the ethical and regulatory dimensions of employing generative artificial intelligence systems in student research and to identify the associated risks and opportunities from the perspective of academic integrity in higher education. Methods. The study applies methods of theoretical generalization, comparative analysis, systematization of scientific sources, and analysis of international recommendations and regulatory documents governing the use of digital tools in higher education. Content analysis is also employed to detect typical violations and inconsistencies arising from the incorporation of generative technologies into student work. Results. It has been established that the use of artificial intelligence systems can both increase the effectiveness of educational and research activities and pose threats of improper borrowing, substitution of authorship, and the purely formal performance of research tasks. The study emphasizes the necessity of clearly distinguishing between acceptable and unacceptable applications of artificial intelligence tools, as well as fostering students' competence in the responsible use of digital resources for scientific purposes. Conclusions. It has been substantiated that ensuring academic integrity amid the spread of generative artificial intelligence systems requires updating higher education institutions' internal regulations, unifying requirements for registering research results, and strengthening the educational component aimed at developing an ethical culture of scientific activity. The results obtained can be used to improve university policies in the field of digital ethics.

Published

2026-03-05

How to Cite

Komlyk, N., Mykhaskova, M., & Krasylnykova, I. (2026). Academic integrity in the context of using ChatGPT and other generative AI systems in student research. Pedagogical Academy: Scientific Notes, (28). https://doi.org/10.5281/zenodo.18870992

Issue

Section

Information and communication technologies in education