Two years after the release of ChatGPT, the technology has had a drastic effect on education. A growing number of students use generative AI to complete assignments and exam submissions, handing in chatbot-written work presented as their own to earn marks, credits, and even degrees.
New evidence from the U.K. has re-established a salient concern: teachers mostly fail to detect AI-generated academic work.
The widespread use of AI for academic work seriously jeopardizes the integrity of high school diplomas and college degrees. It also raises grave concerns about unqualified entrants into critical professions such as nursing, engineering, and firefighting, where a person who lacks genuine knowledge can be dangerous, if not disastrous.
Ironically, some schools facilitate the problem by allowing AI usage while at the same time prohibiting the very technology meant to catch its misuse in academic dishonesty.
A recent study at the University of Reading, U.K., by Peter Scarfe and colleagues, revealed a worrisome lack of success among educators in spotting AI-generated work. The researchers submitted basic AI-generated assignments under fake student profiles, and a staggering 94% went undetected. Even more troubling, the report suggests that even this 6% detection rate is likely an overestimate, because the study's conditions do not reflect real cheating scenarios.
Here is what the report said:
Overall, AI submissions verged on being undetectable, with 94% not being detected. If we adopt a stricter criterion for “detection” with a need for the flag to mention AI specifically, 97% of AI submissions were undetected.
The difficulty of detecting AI-generated work is nothing new. According to a study from the University of South Florida, even trained linguists had trouble telling AI-generated content apart from that produced by people.