
For many years, one of the main problems for academic essays and BA/MA/PhD theses was plagiarism. However, with the recent advent of artificial intelligence (AI) that can write high-quality texts (e.g. ChatGPT), a fresh concern has arisen: was a text written by a student or by an AI?

Plagiarism scanners do not detect AI texts (I tried it!) because those texts are not copied and pasted. Even if you suspect that a text might have been written by an AI (e.g. because an overall direction in the text is missing, or the text is just too good to be true given the student's previous essays), there is no real way to prove the suspicion.

How to deal with this new problem? Yesterday we had a faculty meeting where we discussed this issue of AI texts but reached no real conclusion - so I was thinking of getting some input from here ...


18 Answers


One general approach is to make sure the assignment is not bullshitable.

ChatGPT generates bullshit, i.e. text that attempts to resemble genuine responses to the prompt, but does not attempt to actually be correct. The AI does not generally understand what it means to be correct; it just produces relevant text (relevant text is often correct, but often incorrect).

Unfortunately, many students use a similar process, so it can be hard to distinguish. So my suggestion is to design prompts where bullshit results in a bad grade (and is easy for you to detect), even if an automated tool cannot detect cheating.

  • ChatGPT uses nonexistent, made-up references. Ask students to submit references in a format that can be easily spot-checked (see the sketch after this list).

  • Ask students to provide drafts of their work, incomplete versions showing their thought process, and maybe errors they made on the way.

  • Avoid open-ended prompts like "write X words about Y".

  • Prefer prompts with a global goal, and require each local part of the essay to contribute to that goal. In the rubric, take off points for parts of the essay that do not clearly support the global goal, i.e. for rambling or unnecessary material, and even for otherwise good material if it is not shown how it contributes to the goal.

  • Do not prompt for content for content's sake. Instead of a minimum word count, give a maximum word limit and require students to accomplish a definable goal within that limited space.
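
As a hedged illustration of the reference spot-check idea above, here is a minimal sketch that asks the public Crossref REST API whether each cited DOI actually resolves to a registered record. The file name references.txt and the one-DOI-per-line format are assumptions for the sketch, not part of the original suggestion; nonexistent references are exactly the kind of thing ChatGPT tends to invent.

```python
import urllib.error
import urllib.parse
import urllib.request

# Minimal sketch: spot-check DOIs against the public Crossref REST API.
# Assumption: students were asked to list one DOI per line in references.txt.

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI, False on HTTP 404."""
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404 and friends: no such DOI registered

with open("references.txt") as f:
    for line in f:
        doi = line.strip()
        if doi:
            status = "OK" if doi_exists(doi) else "NOT FOUND -- check manually"
            print(f"{doi}: {status}")
```

Note that a fabricated reference can still carry a real DOI that belongs to a different paper, so comparing the returned metadata (title, authors) against the student's bibliography entry is still advisable.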


Caveat: I haven't tried this; my background is more in AI and less in essay assignments. Also, this answer is focused on essays, because this kind of cheating on a longer text, like a thesis, should be much easier to detect.

Example reference with similar opinion: ChatGPT Is Dumber Than You Think (The Atlantic)

  • To rephrase in a less flattering way: if a ChatGPT generated answer would result in a good mark for your homework assignment, the problem is with the tasks assigned for homework, not with the student that used ChatGPT.
    – quarague
    Commented Dec 16, 2022 at 9:22
  • I was trying to explain to a colleague why I was actually quite excited about ChatGPT's emergence, and this articulates it very well - a forced push towards productive, shorter essays, which I hope also sets people up to write more clearly in their future papers. Fewer bullshit exams or assignments, simply because it's trivial to generate answers for them. I think we are going to need to teach writing to a higher standard, but, honestly? I think that's a great thing for research.
    – lupe
    Commented Dec 16, 2022 at 14:30
  • The problem with requesting drafts is that not all of us write (or keep) drafts.
    – Mark
    Commented Dec 17, 2022 at 4:08
  • Non-bullshitable assignments might actually end up wrecking a bunch of humans. A lot of people get through most of their essay classes writing bullshit to a degree (I wish there were a non-vulgar term for this, but I know what you mean). ChatGPT and successively better AI writers are effectively already displacing this class of student.
    Commented Dec 18, 2022 at 3:15
  • @SidharthGhoshal I consider that a feature, not a bug... The fact that some students survived only writing BS was already a problem in the education system, and this is just forcing us to face the problem and solve it.
    Commented Dec 18, 2022 at 15:48

You ask the writer about the text

Frame change. You are not worried about computers writing papers. You are worried about people falsely claiming to have written papers.

If a human writes a paper for another human, you have the same problem. And the same solution: ask the writer about the text, or interview them.

A human who generates parts of papers (or entire papers) automatically will have a very hard time explaining those parts.

  • This is a great answer. Do we need new rules to account for AIs doing things? I don't see why. The problem of AIs is no different than the problem of humans. How do you know a student didn't subcontract their paper from someone in Bangladesh? How do you know they wrote it themselves? I don't know the answer to that, but it's an old question, and the AI problem is just the same old question.
    – JamieB
    Commented Dec 16, 2022 at 19:33
  • To clarify: Are you suggesting that the instructor interview every student in the class, or just the suspicious ones?
    – Dan
    Commented Dec 17, 2022 at 0:54
  • Remark (also for the commenters): note that there are countries where exams with an oral part have always been the norm. Yes, for everyone.
    Commented Dec 17, 2022 at 8:36
  • @EthanBolker yep, the problem with this answer is that it doesn't scale. Sure you can do this for a small classroom with an occasional writing assignment. But no one is going to routinely do this for bigger classes with routine writing assignments.
    – eps
    Commented Dec 17, 2022 at 23:04
  • @JamieB this completely ignores scale; there is a GIGANTIC difference in perceived morality, money, time, etc. between firing up ChatGPT and outsourcing a writing assignment. It's like comparing the ENIAC with a smartphone: yes, they are both computers, but at some point the differences are so big it becomes a different thing entirely. People could always cheat, but this changes the game in big ways. It also raises the question of how this stuff should work -- if you can use ChatGPT to generate 90% of the paper and all you have to do is a little cleanup ... shouldn't you do that?
    – eps
    Commented Dec 17, 2022 at 23:12

An interesting idea on this topic comes from Ben Thompson at Stratechery: instead of banning the use of AIs, require it. The students' job is not to produce an essay; instead, they have to check and correct what the AI says about the topic.

A quote from his article:

Imagine that a school acquires an AI software suite that students are expected to use for their answers about Hobbes or anything else; every answer that is generated is recorded so that teachers can instantly ascertain that students didn’t use a different system. Moreover, instead of futilely demanding that students write essays themselves, teachers insist on AI. Here’s the thing, though: the system will frequently give the wrong answers (and not just on accident — wrong answers will be often pushed out on purpose); the real skill in the homework assignment will be in verifying the answers the system churns out — learning how to be a verifier and an editor, instead of a regurgitator.

Ben argues that the skills required to check and correct the unreliable AI's output are more relevant in the modern world than the skills required to write an essay in the first place. Perhaps as a bonus, the students will also put 2 and 2 together to realise that if they have an essay to submit to a different teacher, they can't expect ChatGPT to write a decent essay for them.

  • An extension of this would be a question along the lines of "I asked an AI to write two different answers to the question '.....'; discuss these answers, the strengths and weaknesses of each, and which is better".
    Commented Dec 19, 2022 at 15:01
  • This is the only correct, non-Luddite answer. Fighting the future is futile; instead we should embrace it.
    Commented Jan 6, 2023 at 5:50
  • @JonathanReez "This is the only correct [...] answer" - sorry, that's patently not true. The ability to do quick estimates in your head rather than relying on the pocket calculator comes in useful every time you have to check whether something is in the right ballpark. And I am not talking just science. Knowing what an increase/reduction of 2% of your salary means is a very useful skill to have. Thus, knowing how to do approximate arithmetic without a calculator is a critical skill even if everyone carries a powerful computer with them at all times. So is writing.
    Commented Mar 11, 2023 at 15:13
  • @CaptainEmacs The goal of the university is to produce members of society who can produce useful intellectual work. How they produce said work should be of zero relevance.
    Commented Mar 11, 2023 at 21:50
  • @JonathanReez I don't think there's any disagreement about your first sentence, but the "how" is relevant insofar as there are methods and practices which are likely to produce better intellectual work and one of the roles of higher education is to instil those practices. What I think you and Captain Emacs disagree about is the scope of those practices. Personally I would tend to agree with you that some of the skills we're currently expecting students to learn are going to be unnecessary by the time they would be relevant. But I also think it's better to learn something that might not be ...
    – kaya3
    Commented Mar 11, 2023 at 22:03

I want to raise three points to guide the discussion, but they're too long for a comment, so an answer it is:

  1. We should assume NLP models will continue becoming 1. better and 2. widespread. Consequence of (1): it will eventually become impossible for the well-trained human eye to sense that an essay was written by a machine. Consequence of (2): the solution of a "textual fingerprint" for identifying an auto-generated text, as mentioned above, won't be feasible. Currently, this could still work, because there is only one company offering one model for generating ChatGPT-quality text, namely OpenAI itself. Fast-forward two years, and there will be many such companies, perhaps even some explicitly aimed at high-schoolers. In that case, a teacher would have to go down the list verifying the fingerprint with each provider, and a provider would be foolish (against its own market interest) to offer such a verification service in the first place.
  2. We should view this in the broader and existing context of ghost-writing. Some take solace in being able to discern the "bullshittery level" of ChatGPT, but this is tackling the wrong problem. For decades, students have paid others to write assignments for them, and obviously, the task of detecting those texts has nothing to do with detecting that the resulting text was human-generated, because it was -- just not by the right human.
  3. Some have claimed that something like a thesis cannot be generated by a machine, because they "cannot be creative". We should refrain from these kinds of statements, because 1. even search algorithms can be creative, and 2. there are, sadly, many disciplines in academia where innovation hardly exists and bullshittery is the entire game. Ask ChatGPT to write an opinion piece, and it'll give you an essay at high-schooler level. Then ask it to do it in an academic style with complicated words, and you've entered the domain of master's theses (or higher!) in some domains.

Perhaps the take-home opinion essay is just dead, and the future consists of technical reports and intensively researched term papers. Perhaps a fingerprinting system will be the future, but it will be the students whose capacities will be fingerprinted and tracked through time, not the texts.

  • Great answer. It really amazes me how many people think this tech is going to always be controlled by well-meaning researchers, and how many people don't realize how quickly this tech is evolving.
    – eps
    Commented Dec 17, 2022 at 23:42

While the idea of ChatGPT writing entire MA/PhD theses is certainly entertaining, right now the technology just isn't at that level. It can write multiple paragraphs at a time, and maybe a human can manually piece those together into a proper "paper", but it's limited in what the human can feed in. It's really a glorified chatbot, and it's not designed to write essays for students. For example, there's no supported way to feed in an article/story and have ChatGPT respond to it. Moreover, there are simply limits to ChatGPT's semantic understanding.

In the future I suppose some companies might make a 'nefarious' version of ChatGPT and monetize it, considering all the various companies we already have that monetize academic dishonesty. When the technology reaches that level, hopefully there will be new tools on the detection side that can deal with the issue.

  • I can't comfortably let ChatGPT do something for me.
    – Neuchâtel
    Commented Dec 15, 2022 at 15:49
  • This is a great answer. The biased samples introduced to the public are just that. "AI" is a bubble. Where's my self-driving car? Let's see AI beat Zelda 1. Let's see it. Anyone who works hands-on at a low level fundamentally understands the algorithm only does what it's told, diametrically opposite to the process of scientific discovery.
    – user156207
    Commented Dec 15, 2022 at 16:59
  • Perhaps the answerer doesn't know how bad the writing of some college undergraduates actually is.
    – workerjoe
    Commented Dec 15, 2022 at 21:49
  • I think the point about things not being in the training set is a red herring. The training set probably doesn't include any movie scripts where the Teletubbies fight Superman, but it can generate those just fine. You might not be able to prompt it with a whole article, but you can take the key points from that article and form a prompt that gets you some response, then prompt it further to get it to change its response. (I first got a script where Superman is the good guy, then I asked it to make Superman the bad guy, then I asked it to also make the Teletubbies win.)
    – kaya3
    Commented Dec 16, 2022 at 0:11
  • Besides, most essay-style assessments are not asking the students to produce any new knowledge - that is expected for PhD theses, of course, but for coursework at undergraduate level the "training set" very likely includes previously-written essays about the same topic. The main reason that we don't need to worry just yet, I think, is that ChatGPT tends to produce a lot of rubbish that only resembles good academic work in superficial form, not in substance.
    – kaya3
    Commented Dec 16, 2022 at 0:17

One might want to return to writing essays in person. This would only apply in situations where the length and time involved made that appropriate, but for certain assignments (not theses!) one could imagine either taking class time for this, or proctoring in some fashion. Particularly as (current!) AI seems to "excel" at fairly short-form material, giving people 20 minutes to write such a short assignment in person may be a viable way to deal with it.

(That said, I like usul's answer better, and have used it for many years for certain longer assignments.)

  • Whatever happens, it needs to happen in class. Like maybe teaching them instead of them learning on their own by doing homework.
    – Mazura
    Commented Dec 17, 2022 at 2:45

Sort of an extension to André LFS Bacci's answer: consider an oral exam. It's not going to be ideal since oral exams have their own set of problems, but ChatGPT can't fake an oral exam, and neither can Chegg.

You could also use an oral exam as a check on whether the student wrote the work. It won't prove an AI wrote the essay, but it can indicate that the student did not write it.

  • Difficult to do oral exams for 600 students in each of the 10 modules they take in a year. That's 6,000 oral exams per year group, per year.
    Commented Dec 19, 2022 at 14:48

The International Baccalaureate (IB) is an international high school curriculum used by many schools worldwide. Although it is for high school, one part of it can provide insight for the OP: the "Extended Essay" (EE), which is basically a 4,000-word essay.

For every student, the EE research and writing process includes three formal 20-30 minute "reflection sessions", which are interviews between the student and teacher, the purposes of which are to support the student's learning and progress, and to check for authenticity, i.e. to check if the essay was actually written by the student.

  • In the first reflection session, they discuss the student's proposed research question.
  • In the second reflection session, they discuss the student's first draft.
  • In the third reflection session, they discuss the student's second (and final) version.

This system has been used by the IB for a long time, and it seems to work, as far as checking for authenticity. Perhaps a modified system could be considered for university.

  • Yes, indeed, but quite labor-intensive. I think the questioner wants to know how to do this more "automatically", ironically to filter out "automatic" writing. :)
    Commented Dec 17, 2022 at 2:23
  • @paulgarrett I did not get the impression that the OP is only interested in quick fixes.
    – Dan
    Commented Dec 17, 2022 at 5:26
  • @paulgarrett ... "quite labor-intensive". But, in the future, we can get computers to do those interviews of the students. :)
    – GEdgar
    Commented Dec 26, 2022 at 13:15

The question points to an underlying development that was identified by Ray Kurzweil: the continuing application of the No True Scotsman argument to intelligence. What we consider "truly" intelligent has changed with the advancing capabilities of machines, on the grounds that a task that can be mechanized is by definition not a sign of "true" intelligence. Far into the 1900s, playing world-class chess or being able to translate reasonably well between a dozen languages would have been considered a sign of the highest intelligence. So would the ability to write reasoned essays in college-grade English about almost any topic known to mankind (which is what ChatGPT does, of course).

The progress made in information technology shows us that all these tasks can be done by mechanisms. Nobody in their right mind, apart from the occasional excited Google engineer, would claim that these mechanisms are "truly" intelligent. Because we don't think of ourselves as mechanisms, we backtrack and change our classification of what we consider "truly" intelligent. Everything that is rule-based is obviously not truly intelligent: you can beat it with stupid brute force. Everything that is simply based on pattern recognition is not truly intelligent: there is no true "understanding" and no "originality". But alas: the texts are good enough to earn certificates and pass German college term tests. And this is just a prototype.

There are a couple of conclusions here. We can either backtrack further and say:

  1. Much, if not most of what we do in academia is not "truly" intelligent. The amount of original, creative, essentially unpredictable work is small.
  2. Much of what we do professionally (and what college education prepares us for) is not truly intelligent work. Programmers, radiologists, lawyers: just pattern recognition and pattern application.
  3. Our education and our professions are on the brink of being obsolete.

This is scary.

Or we hold our ground and continue to consider our education and professions at least somewhat intelligent. Then we cannot deny that we have produced intelligent machines. The Google engineer was right. Sooner rather than later, machines will be able to do any intellectual task we can do, including the ones one might currently consider original, creative, and essentially unpredictable. They will probably be able to perform them better than we do. In fact, they will probably become able to perform intellectual tasks that are entirely beyond our regular reach.1

This is even more scary.

My guess is that we will take a much bigger step back than ever before: we will redefine not what it means to be intelligent, but what it means to be human. We will be forced to realize that intelligence is not what defines us as human. It is probably not even art that defines us, or only insofar as art defines us as individuals.2 Instead, it is emotions: love, compassion, passion, even hate. Machines are unable to feel and will be unable to feel for the foreseeable future.

P.S. Are you still waiting for an answer to your question?

  1. Using ChatGPT is not any more plagiarism than using a pocket calculator. If it gets you results, it's a useful tool.
  2. Therefore, don't be a Don Quixote. Instead of a futile attempt at preventing the use of ChatGPT et al., embrace it. kaya3 wrote an answer in this direction. The future of academia and humanity lies not in defending the indefensible but in employing the useful.
  3. Change the curriculum to stay relevant. Nobody teaches how to extract roots by hand in algebra any longer, or matrix tricks. Try to teach things which may be hard for AI even in 20 years.

1 I'm not necessarily hinting at the prospect of a technological singularity, which often involves an unpleasant quasi-religious sentiment; a much weaker development would suffice: machines continue to improve at cognitive tasks (like recognizing cancer cells, designing mechanical things, predicting the weather, making investment decisions, driving a car). We increasingly find that they do these things better than we typically do, and we increasingly rely on them. This is a gradual development without tipping points of any kind. (It is funny that the proponents of a singularity recognize that technological development is exponential but fail to see that exponential curves are emphatically void of singularities; quite the contrary: they look the same everywhere. The discovery of fire, the advent of agriculture, and the industrial revolution have disrupted societies much more than a mechanical lawyer or programmer ever could.)

2 As today, different individuals would produce different art. This would include mechanical individuals, i.e. different neural nets, or differently trained neural nets (the equivalent of separately raised identical twins). As today, experts (including, of course, mechanical experts) would be able to make an educated guess as to which individual (including, of course, a mechanical individual) created a given piece of art, or at least which tribe and era it is from (e.g. 17th-century Flemish, 19th-century Xhosa, or 202x DALL-E lineage).


At the moment, and as a minimum, you need a clear policy on its use, perhaps forbidding its use. That isn't enough, of course, but you need to make it clear. That will ameliorate the problem slightly, as most students will comply if the policy is stated in a reasonable way that emphasizes the learning goals. The reasoning behind a policy needs to be made as clear as the policy itself (as usual).

Mid-term, it is likely that AI solutions will emerge that can catch the use of such things with fair accuracy, maybe even good accuracy. Some are already underway. They might even provide a good balance between false positives and false negatives. The former might be handled if students were always subject to a follow-up oral presentation of essays and such, as is done with theses.

I think that the development of a detection tool is a worthwhile AI research project at the moment.

Long term, it is harder. I don't think (but am not certain) that ChatGPT in particular tries to obfuscate its use, but that can happen. At some point, we may just need to completely change the techniques we use to encourage and evaluate honest student work. It is worth spending some time on that now, and trying out ideas. Oral exams don't scale well, but are harder to misuse.

At the moment the AI text generation isn't very creative and a careful reading might catch a lot of it - especially for longer texts. But AI generated work and poor but honest work might be harder to distinguish.

As I understand it, ChatGPT doesn't have access to the internet (say, Wikipedia). That would change the game, perhaps, but might also make plagiarism detection easier.


I'll note that forbidding its use might not be the only proper policy. Use with citation might be considered as you develop such a policy.

  • I had downvoted this as theoretical. The line drawn for using writing aides is arbitrary. How is it any better if a student gets writing help from a human? The human is better at it and provides more contribution to a written article. The technology for material contribution does not exist. Enforcement of any policy is practically impossible. The entire discussion about regulation is a giant waste of time. It's planning for something that cannot happen in reality.
    – user156207
    Commented Dec 15, 2022 at 17:17
  • @user156207, note that they are already banned here with no really effective enforcement mechanism. And see arxiv.org/abs/2011.01314, helpfully provided by user/mod cag51. Look at Academia Meta for more.
    – Buffy
    Commented Dec 15, 2022 at 19:46
  • @user156207 you made your opinion quite clear enough with one of those comments, no need to repeat it. Anyway, a) lots of people, including experts of the domain, strongly disagree that there will be no problem "in our lifetimes"; b) even if there were only a low probability of this happening, it would be unwise not to prepare for it anyway; c) already today ChatGPT is a problem – not in that it actually generates serious scientific content yet, but in that it easily generates such a quantity of convincing-looking bogus that it risks overloading the human gatekeepers.
    Commented Dec 15, 2022 at 20:02
  • @leftaroundabout just following up Buffy's questions. I'm not saying I'm right. I'm saying I disagree. I feel those "experts" are selling you something. I myself have expertise in prediction. I myself have seen a lot of promises that aren't being kept. Alas. What are your preparations for my cat doing my homework in trigonometry?
    – user156207
    Commented Dec 15, 2022 at 20:06
  • @Magma, actually I recommend a policy concerning its use, not one forbidding its use necessarily. But some uses probably need forbidding, as plagiarism, though a bit weird here, is still a problem.
    – Buffy
    Commented Dec 16, 2022 at 19:38

In a few years, when current students enter the workforce, it seems likely they will be using AI-based language tools like they are using Google today. They will be using these tools to access existing knowledge, to polish the presentation of their results, to translate text that they have written in their native tongue into the language they are supposed to work in, or maybe even as research assistants that contribute genuinely new results. Their tools will likely be vastly superior to what is available now. I would argue we should prepare our students for that future.

With that in mind, I think a scalable and relatively future-proof way of dealing with AI assistance is to supply an AI-generated response to a homework assignment as part of the assignment. The instructor would set an essay assignment as usual, but in an additional step they would also use the best model they can get their hands on to auto-generate a response, maybe in an iterative fashion where the model first generates an essay outline and then fills in the chapters. They would then add the process they used to generate the essay to the methods section of this example essay. The students would then be graded on whether, and by how much, they managed to improve on this baseline. The teacher could perhaps also include a brief critique and grading of the baseline, to show the students in what ways the AI-generated text is still failing.
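
To make the "auto-generate a baseline" step concrete, here is a minimal sketch using the OpenAI Python SDK. The model name, the prompts, and the two-pass outline-then-chapters structure are illustrative assumptions on my part, not a prescribed workflow; any comparable text-generation API would do just as well.

```python
from openai import OpenAI  # assumes the `openai` package (v1+) is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment
TOPIC = "The causes of the 1848 revolutions"  # hypothetical assignment topic

def ask(prompt: str) -> str:
    """Single chat-completion call; returns the model's reply as text."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you have access to
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Pass 1: generate an outline. Pass 2: expand each outline point into a section.
outline = ask(f"Write a five-point outline for a short essay on: {TOPIC}")
sections = [
    ask(f"Write one paragraph of an essay on '{TOPIC}' covering this outline point:\n{point}")
    for point in outline.splitlines() if point.strip()
]

baseline_essay = "\n\n".join(sections)
# Hand out baseline_essay together with the prompts used (the "methods section"
# the answer describes), so students can be graded on how much they improve on it.
```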

In this way, the topic of AI assistance can be openly discussed in class: students get natural exposure and can discuss different ways to use AI; they can discuss whatever limitations current AI still has; they learn about attributing credit; and they have to learn, and ultimately demonstrate, how to do better than the output of a current state-of-the-art public model. Compared to schemes that rely on oral examinations, the additional teacher workload should also be relatively low, as the baseline essay is produced in a highly automated way.


One IT website has called ChatGPT “Dunning-Kruger as a service”. It creates very convincing bullshit.

It’s very annoying and you might need to train people up to detect it (the convincing nonsense, not that it was created by an AI), but for a while at least you can judge these submissions just by their quality.

  • You could also call it "Ask Wikipedia". I very strongly doubt that the content of the bot's output on any topic will vary from what Wikipedia already has to say.
    – EvilSnack
    Commented Dec 19, 2022 at 4:18

Just one small aspect: for student papers, require the bibliography to also include YOUR library's call numbers for books, plus the link used for online access of journal articles (or the call number, if accessed in print). This also helps against ghostwritten work, as it forces the ghostwriter to use your library, upping the cost. ChatGPT will just write bullshit, or forget to add this. I know this is not standard for publications, but for training writers it makes them show the library work they (supposedly) did.


Ask the AI questions about the subject of the assignment. If you ask enough questions, you will gain a good idea of the AI's limitations, and also a good idea of the AI's writing style.

Then in the assignment, require the student to answer a question that the AI was unable to answer.

The answers you receive from the students will show the AI's limitations and style if they are using the AI; otherwise, they will at the very least be the work of a live human being.

  • Example of a usable essay prompt that AI cannot legitimately respond to?
    – Dan
    Commented Dec 17, 2022 at 2:37
  • That would depend on the topic and the level of knowledge you want to see.
    – EvilSnack
    Commented Dec 17, 2022 at 2:41
  • Topic: Free will (from Philosophy). Level of knowledge: basic understanding of the standard arguments for and against free will. But I don't know what your area of expertise is, so it may be better for you to choose your own example. I think an example would strengthen your answer.
    – Dan
    Commented Dec 17, 2022 at 3:08
  • This answer assumes humans won't eventually learn to mimic AI-generated text (which is already happening because of text choice assistance apps). It also assumes there won't be different AIs with different types of voices and writing styles.
    – eps
    Commented Dec 17, 2022 at 23:45

Perhaps a direct inquiry to the student, such as "Did you write this yourself?", would provide the desired response.

  • If all students were honest, no exams would be needed...
    – Trang Oul
    Commented Mar 28 at 15:05

Going along with EvilSnack's answer, perhaps this should be seen as an opportunity rather than a threat. There are some echoes here of how schools used to view Wikipedia and similar sites. While there is certainly backlash against the increasing use of AI, unscrupulous people are already seeking to advance its use past the point where we can tell it is being used. News articles, social media posts, forum messages, and even telephone calls are already being handled by AI bots.

Maybe have students generate an article or essay with an AI, and then run it through the process of proofreading, fact-checking, and editing. Changes should be tracked along the way (a feature that's part of any decent word processor these days), and the students should be prepared to defend their actions and choices. This could also be a collaborative assignment.

Along the way, the kids may notice peculiarities and biases that we might not, and thus become better critics of AI-generated copy than we could hope to be. This should also help inform them on the topics of misinformation and deception.

Even if AI is not widely adopted, academia should always approach these sorts of matters thoroughly, rather than immediately regarding them as a challenge to tradition.


(I'm adding another answer because it's fundamentally different from the other one I wrote)

There are tools available now to detect AI-written text. One way is to get another AI to classify the text; the other is to have the original text-generating AI add a watermark to its output. Such watermarking is more sophisticated than a physical watermark: the generator modifies the output text in a way that is detectable by computers, but not by humans.

See sources for classifiers and watermarking respectively.
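
To give a feel for how such a watermark can work, here is a toy sketch in the spirit of published "green-list" watermarking schemes (e.g. Kirchenbauer et al. 2023). Everything here - the hash rule, the 50% green fraction, word-level tokens - is a simplifying assumption for illustration; real schemes operate on the model's actual tokens and logits, not on whitespace-split words.

```python
import hashlib
import math

GREEN_FRACTION = 0.5  # fraction of the vocabulary marked "green" at each step

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically assign `token` to the green or red list, seeded by
    the previous token -- the same rule the (hypothetical) generator used."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] < 256 * GREEN_FRACTION

def watermark_z_score(tokens: list[str]) -> float:
    """How far the green-token count deviates from the ~50% expected in
    unwatermarked text. A large positive z-score suggests watermarked text."""
    n = len(tokens) - 1  # number of (previous token, token) pairs
    hits = sum(is_green(tokens[i], tokens[i + 1]) for i in range(n))
    mean = n * GREEN_FRACTION
    stddev = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - mean) / stddev

# Usage sketch: z = watermark_z_score(essay.split()); flag essays with, say, z > 4.
# A generator that softly prefers "green" tokens leaves a statistical trace that
# is invisible to human readers but easy to test for, as the answer describes.
```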


Students are expected to understand grammar and punctuation, but Grammarly is allowed and even recommended by many tutors; students can use Grammarly to improve the quality of their writing. If Grammarly was embraced, why shouldn't other AI advancements be embraced? If 20 students out of 100 use ChatGPT to answer the same question, won't Turnitin flag some of them for plagiarism? Even when you use ChatGPT, you still have to read and edit the output, and you have to assume the answer you are getting has not been submitted elsewhere. It cannot write a 15,000-word dissertation, or even a 5,000-word essay asking the student to analyze a case study; the student will need to ask several questions to get a good word count, and will also have to proofread and format references.

At master's level, ChatGPT is just like advanced Google: it gives you the information you need instead of you having to search multiple websites. But you still have to study the output and make it align with your intentions. The output will not always be original, as more people use it and as more students submit to the Turnitin database or SafeAssign. Schools should wait before implementing any policy. Despite Turnitin's claims, students are able to beat it using paraphrasing software; Turnitin is just another company trying to sell its services, and institutions should not jump at every service, product, or update offered by a company.

PowerPoint slides with narration cannot be done by ChatGPT; student input is still needed. The process of creating the slides is the learning process, and it makes no difference whether the answer was provided by ChatGPT, Google, or the school library.
