It blew my mind when I discovered that students can chain multiple AI tools together to evade detection of AI use in assignments and test questions.
I was on a family visit, sitting with several guests. One of the people I distantly knew was working on an assignment. That person was literally copying each question from the assignment and pasting it into ChatGPT to get an answer. After receiving the answer, the person copied it into the premium version of another AI tool to expand or shrink its length and change the wording and tone. After running the answer through two or three AI-based tools this way, the person then used AI-based plagiarism-checking software to screen it for plagiarism. Once the tool reported no plagiarism, the person submitted the answer, complete with scientific references. When I checked, the cited references were actually available online.
This gave me a scary picture of how academia might be operating. For me, the issue is not that students used AI to answer the questions. The major concern is that students may not understand what they were answering or what their answers actually mean in the real world. What would they do if they had to think through the same or a similar problem themselves? And if students cannot solve such problems in the real world, how does that affect the reputation of the instructor and the university?
On the other side, are universities even doing a reasonable job of meeting the needs of the market and of students? If a student who relies on AI today can also succeed with AI when solving similar problems in the future, is the university training students in the right direction at all?
Also, is this kind of academic dishonesty even detectable while grading assignments?
Watching this unfold, I was left confused and speechless.