15

I am interested in knowing whether it is ethical to use ChatGPT to write the abstract for a schoolwork/paper I wrote. The paper itself is my original work. Is it OK to use ChatGPT to help with an abstract?

3
  • Comments have been moved to chat; please do not continue the discussion here. Before posting a comment below this one, please review the purposes of comments. Comments that do not request clarification or suggest improvements usually belong as an answer, on Academia Meta, or in Academia Chat. Comments continuing discussion may be removed.
    – Bryan Krause
    Commented Oct 19, 2023 at 19:41
  • Why are you asking if it's "ethical"?
    – Nik
    Commented Oct 20, 2023 at 23:31
  • 2
    @Nik Probably because they want to know if it is? Commented Oct 22, 2023 at 0:38

9 Answers

79

To add to the previous answers:

  • The best way to answer your question is simply to ask your academic institution about their policy (or their lack thereof) regarding the use of AI tools for scientific writing. They might tell you that you're not allowed to use ChatGPT at all, that you may use it but must acknowledge it, or that you should use your best judgement.
  • As previously mentioned, Nature recently released a series of articles highlighting the pros and cons of using AI tools in academia. Overall, my personal interpretation is that using ChatGPT can be beneficial for tasks where the ratio of "amount of time required to do the task" to "epistemological importance of the task" is high. For instance, shortening an abstract to reach a certain word count, writing a summary for an internal newsletter, or writing a vanilla methods section are tasks that take a lot of time but could be delegated to ChatGPT without major issues (given proper human oversight).
  • On the other hand, AI tools should be avoided when dealing with sensitive tasks. Retraction Watch recently reported on gross errors and fabricated references in papers generated with ChatGPT, which is terrible scientific practice.
  • Grumpy old man yelling at the clouds here: do not forget that, although boring and unrewarding, tasks such as writing an abstract, writing a review, studying a methods section, or summarizing the main ideas in a scientific text are important competences in their own right. Be sure to master them before trying to delegate them to ChatGPT.
9
  • 65
    +1 just for the last paragraph, though I'm another grumpy old man willing to yell at the clouds.
    – Buffy
    Commented Oct 18, 2023 at 13:10
  • 7
    If point 3 isn't done, then point 2 will be repeatedly done due to sheer ignorance. There used to be a correlation between grammatical correctness and general correctness, but ChatGPT effectively broke that correlation, so people should be conscious of that. ChatGPT can and will fabricate non-existent references. Extremely dangerous if you're a lawyer and you cite cases that don't exist.
    – Nelson
    Commented Oct 19, 2023 at 0:55
  • 10
    Grumpy old lady here. I was thinking about this for a while, and I came up with the question: if a student's work cannot be distinguished from the work of a bot, what are they doing at university? What is the university teaching them? Maybe the skill of gathering and collating data is best left to a machine. Like ploughing. Yes, you can use a plough unwisely or in a criminal manner. But it saves a lot of work-hours.
    – RedSonja
    Commented Oct 19, 2023 at 12:09
  • 4
    In my understanding, ChatGPT has the same understanding of what passes through it as a printer does, i.e. none, beyond e.g. the printer following formatting rules and being able to execute computer language that drives generation of graphics. Go ahead and generate a summary -- and then treat it as if a rabbit had eaten papers on the topic and left a trail of scraps: check every word. Commented Oct 19, 2023 at 22:09
  • 5
    @RedSonja if a chess player's work can't be distinguished from the work of a bot, what are they doing sitting at the table? What's the point? You could save so many work hours... -- Not that I per se disagree with letting machines do boring data-gathering jobs. This has been done for decades. But there's a huge difference between writing an algorithm that gathers data according to a precisely understood specification, and letting an AI gather data according to some statistical patterns from its training that no human understands and may well be riddled with all kinds of weird biases. Commented Oct 20, 2023 at 20:09
13

This is an entirely personal view.

There are ethical and unethical uses of such tools. Using them to produce and publish the abstract at least borders on the unethical.

Note that ChatGPT and similar things have no mind. They have no morals. They have no judgement. They aren't intelligent in any real sense. They can and do produce garbage with no warning. If they did have mind and intelligence, then publishing what they produce without citation would be plagiarism.

But, if you treat them just as tools to get some possibly interesting feedback, then you could probably avoid ethical dilemmas. We use other "mindless" tools, of course, such as grammar and spell checkers.

If you assume that everything produced by these tools is possibly wrong and you use them only for suggestions or abstractions on things you have written yourself, then you are probably fine, provided that you adhere to the disclosure rules of any publishers.

But I also point you to the last paragraph (especially) of Camille Gontier's answer.

And I'll also note that these tools seem to be getting more dangerous now that they can probe the web, which is filled with disinformation on many important topics. Mind and morals are required to sort out this cruft, and these tools have neither.


There are, or seem to be, completely ethical uses of these tools. One that seems interesting and possibly useful is the examination of large numbers of X-rays to search for subtle signs that indicate the possibility of cancer. But even here, the results need to be checked by skilled humans (mind and judgement).

The problem isn't the tools themselves, but the use of the tools, especially when it is misunderstood what they can and cannot do. As with other tools and techniques, one must use intelligence to guard against both false positive and false negative errors.

4
  • 9
    I guess indeed, it depends how you use it. I would argue that ChatGPT is a great tool for creating a first draft of an abstract, but the worst tool for creating the final version of an abstract. Commented Oct 18, 2023 at 14:34
    In my experience, as a non-native speaker, it's the opposite. It's the best tool for taking my first draft of an abstract and turning it into much better prose (and often shortening it as a result).
    – Luca Citi
    Commented Oct 19, 2023 at 22:02
  • 1
    What do you think is more likely to be wrong, a spellchecker or ChatGPT? If a spell checker did not highlight a word, you can be pretty sure it's in one of its dictionaries (or equivalent); the only reason the word would still be wrong is that you, human, meant a different word, which you can hardly blame on the spellchecker being "wrong". Commented Oct 21, 2023 at 4:34
  • I would strongly disagree with the notion that GPT-4 has no intelligence. It's not a human-like intelligence (it's superhuman in some ways, subhuman in others, and deeply alien in the way it does things under the hood), but I find it hard to come up with any test of reasoning not contingent trivially on embodiment that GPT-4 would fail but, say, a chimpanzee would pass, and I would strongly contend that chimpanzees have some intelligence. Also not every contribution by an intelligent entity to a paper requires citation (for instance, reviewers often only get anonymous general acknowledgement).
    – Polytropos
    Commented Oct 21, 2023 at 10:28
11

The issue here might be less of an ethical one, and more of a functional one. Many journals and nearly all school/university assignments either forbid the use of ChatGPT, or require the use of ChatGPT to be explicitly noted.

Check the submission requirements to make sure that the use of ChatGPT is permitted. If it is, you can then consider whether its use is ethical.

1
  • 3
    This is the most important point. Abstract moral arguments about using AI in general don't matter if there's a specific policy that prohibits their use. If you turn in work created by AI, knowing that there is a prohibition on submitting such work, you are committing fraud, which is always unethical.
    – barbecue
    Commented Oct 19, 2023 at 15:21
3

It is not an easy question to answer! It all depends on one's preferences and how one looks at it. I'd consider writing an abstract myself and then rephrasing it with tools like ChatGPT, or asking it for help when I'm stuck with code, to be ethical. On the other hand, feeding it the paper and asking it to write the entire abstract is something I'd not do. There was an article published in Nature on Monday addressing similar things. One may have a look at it:

https://www.nature.com/articles/d41586-023-03235-8
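
For concreteness, here is a minimal sketch of that draft-then-rephrase workflow (my own illustration, not taken from the Nature piece), assuming the official openai Python client; the model name and the prompt wording are placeholders:

    # Hedged sketch: rephrase a human-written draft abstract; never generate
    # one from scratch. Assumes the official `openai` package (>= 1.0) and an
    # OPENAI_API_KEY environment variable; the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    draft_abstract = """(your own first draft of the abstract goes here)"""

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": (
                "Rephrase the following abstract for clarity and concision. "
                "Do not add any claims that are not already present:\n\n"
                + draft_abstract
            ),
        }],
    )

    print(response.choices[0].message.content)  # check every word before use

The prompt deliberately constrains the tool to rephrasing: the intellectual content stays yours, and you still have to verify the output word by word.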

1

In many of the conferences I'm involved with, the committee typically permits the use of ChatGPT for rephrasing, but not for original content creation. Thus, its acceptability largely hinges on your intended application. If you're aiming to rephrase existing content, it's generally acceptable. However, always consult the guidelines of the specific journal or conference you're targeting, as some might prohibit the use of ChatGPT altogether.

2
  • I think this one is the best answer
    – Snared
    Commented Oct 19, 2023 at 6:25
  • 2
    OP is doing schoolwork; they're very unlikely to submit schoolwork to a conference.
    – Bryan Krause
    Commented Oct 19, 2023 at 15:06
1

You have two different questions here. One in the title, one in the body of the question.

Is it ethical?

That's a moral question without an objective answer. I personally would regard it as unethical, as you're not doing that part of the work yourself, and it's important work.

...because...

Is it OK to use ChatGPT to help with the abstract?

I would say NO for an objective reason.

The abstract is a very important part of your document. After the title, it is the part that tells people whether or not they should invest their valuable time in reading the article, which they would only want to do if the article covered a specific topic in a way that was useful to them.

Put another way, it's the way we filter out all the articles we don't care about from the ones we do.

It is, therefore, far, far too important to both writer and reader (and publisher, if that's relevant) to be left to a piece of software with no intelligence at all (calling them AIs doesn't make them intelligent; that's just marketing, i.e. a lie).

You cannot trust the "AI", so you have to check that it delivers a proper summary of the article. You'll basically have to put in as much effort as you would have spent writing the abstract yourself in the first place.

Writing an abstract (or summary) is a valuable skill beyond academia.

Learning to write these types of summaries is a hugely valuable skill with applications in all sorts of ways. The reality is that when, e.g., your boss wants a report, you'll be lucky if they even read the summary (the "abstract"), so that bit of writing has to be good. Learning to do this is enormously useful in careers, in business, even in private life.

Using ChatGPT or similar does not teach you anything and leaves you at the mercy of its (considerable) limitations and (frequent) errors.

4
  • 1
    Or, think of the "I" in "AI" as meaning "Intelligence level of an earthworm". Too many people assume by default that "Intelligence" means the human-level intelligence which they have, because that is the majority of their experience. Commented Oct 19, 2023 at 22:04
  • 1
    Yes! The text produced by ChatGPT tends to sound very nice, but it's also very vague and it does not know which specifics of your research are important. It might be useful for an introductory sentence, but is not good at presenting specific results, which are important for an abstract. Commented Oct 20, 2023 at 14:06
  • The argument that checking that an abstract is good takes as long as writing it yourself is clearly wrong for papers that one has authored oneself. For instance, it's perfectly consistent to think that an unreliable abstracting AI might have to sample 20 times from its output distribution to find a good abstract, but checking 20 proposed abstracts for a paper I have written until finding a good one is probably still faster than writing one good abstract myself. And that assumes a user who does plain rejection sampling, not one who iterates upon the AI output intelligently.
    – Polytropos
    Commented Nov 8, 2023 at 9:21
  • @Technophile Humans would be utterly lost, for the most part, if they had to play GPT-4, i.e. answer the same range of questions in the same range of languages it gets, with say 20 times more time to answer than it takes. On the other hand, I think a human could play earthworm using appropriate remote controls, and GPT-4 could probably convincingly play human for a limited time (20 minutes?) if communicating online and prompted correctly. I'd say that suggests more than earthworm-level intelligence.
    – Polytropos
    Commented Nov 8, 2023 at 9:26
0

Is it unethical to use ChatGPT to create abstracts?

Regarding abstracts in research publications:

No. We are doing research. Our goal is improving human knowledge. Publications are just a way to convey some novel human knowledge.

The abstract is a condensed, human-friendly version of the publication. If some program helps researchers write abstracts so that they can spend more time doing actual research, then there is nothing unethical about it, unless it plagiarizes other publications.

Ethics put aside, some publication venues have an explicit policy on the use of AI to write publications.

6
  • 12
    ChatGPT and other LLMs absolutely do plagiarise other publications. They rely on a huge database of texts, collected from the internet without the authors' permission, and they never acknowledge the original authors. Commented Oct 19, 2023 at 7:39
  • @DmitrySavostyanov here we are talking about summarization. It makes plagiarism much less likely. Commented Oct 19, 2023 at 14:27
  • 1
    OP seems to be doing schoolwork, not research.
    – Bryan Krause
    Commented Oct 19, 2023 at 15:04
  • @BryanKrause thanks, I clarified the answer. Commented Oct 19, 2023 at 15:07
  • 5
    @FranckDernoncourt LLMs "borrow" their words and phrases from the corpus of texts published on the internet. Many authors who published their texts for free never gave consent for their texts to be used to train a closed-source LLM to benefit a for-profit commercial company without compensation or acknowledgement. An LLM won't be able to "summarise" without having access to the large corpus. Commented Oct 19, 2023 at 17:10
0

Whether it's "ethical" depends on how the abstract is used.

If it's used to judge the author, then yes, it's unethical, since it's not your work. If it's just to get scientific work done, it's still relevant, since language is compressive, and the same words coming from it versus from the author warrant different interpretations. If the author agrees with the output, then it becomes irrelevant (as to the second point).

Personally: if I can't do it myself, I refuse to let others do it, as to me that's weakness, and I shouldn't be in this field. If I do great science but can't write, then that's who I am, and I won't pretend otherwise. Only after I master a craft can I delegate it as mindless work - e.g., I'm totally fine with using calculators, since I've mastered arithmetic, and I even used Wolfram Alpha for much of my more non-trivial homework in college, but I always ensured it wasn't impeding my learning.

That said, personally, I wouldn't mind ChatGPT writing the entire paper for me1 (minus doing the science, of course), because I know I'm perfectly capable of doing it myself, and have done it many times. Fat chance it does it as well as I do, but that's a matter of time. If it does it better than I do, then it becomes a problem, though in that case I'd try to learn from it so that's no longer the case, or make a note in the paper that AI assistance was used. However, I can't say I endorse "as long as you think you've got it" as ethical, since it assumes honesty and correct self-evaluation; there'd need to be some external check (e.g. proof of prior work).

1: But I'd not do it for other, ethics-unrelated, good reasons. I've also not thought hard on this; take it with a grain of salt.

1
  • The other question is: how far can you trust the result? What are the equivalent error bars? Considering that it has the same understanding of what it is processing as e.g. a laser printer. Commented Nov 23, 2023 at 5:46
-1

I am relatively new to academia.stackexchange.com. This is my first answer, though I have upvoted several of the answers above. I disagree with @StephenG above, but I was not able to downvote his answer.

Is it ethical?

That's a moral question without an objective answer. I personally would regard it as unethical, as you're not doing that part of the work yourself, and it's important work.

I believe @StephenG is wrong and moralistic. This is an ethical question, not a moral question. Morality and ethics are different subjects never to be confused -- do so at your own peril and risk of being labeled a pariah. Ride out of town on the same high horse whence you came.

The institutional bans are highest priority to observe and obey; the "functional" questions are second most important. There is no longer a moral obligation to suffer when learning how to write. Learning how to write can actually be done faster using AI tools like ChatGPT. Gone are the days when people have to struggle with syntax and grammar. Now people (young and old) can focus on higher cognitive activities.

...because...

Is it OK to use ChatGPT to help with the abstract?

I would say NO for an objective reason.

Your "objective" reasons are not objective, they are personal opinions imposed on others.

If the academics in the room think people should suffer just because you or I did when we were growing up 20+ years ago, don't impose your moralistic suffering on someone else. Be practical, not canonical.

Writing an abstract (or summary) is a valuable skill beyond academia.

Learning to write these types of summaries is a hugely valuable skill which has applications in all sorts of ways. The reality is that when, e.g. your boss wants a report, you'll be lucky if they even read the summary ("abstract") so that bit of writing has to be good. Learning to do this is enormously useful in careers, in business even in private life.

Using ChatGPT or similar does not teach you anything and leaves you at the mercy of its (considerable) limitations and (frequent) errors.

The last statement is flat out wrong, just preachy criticism of technology you don't understand well enough yet.

It is up to the user to learn from their use of the technology and to learn how to use it most effectively to write good prose, abstracts, or whatever the task at hand is. It is a huge (and ethical) time saver when used correctly. The limitations are disappearing every day. The frequency of errors is going down fast as the LLMs ingest more good examples of writing and source code.

If the author uses it as a writing "assistant" (aka copilot, NOT a substitute), supplies the main thoughts, gets a first draft, then learns from it and improves it, what matters is that the end product (the summary) is accurate and well written. Pair programming is the MOST effective way to write good-quality code and pseudocode, so why not pair writing?

In two to three years, the quality of writing by AI writing assistants will be almost indistinguishable from that of some of the best newspaper writers. Ye who resist learning how to apply it well shall perish!

1
  • It looks like this is a criticism of another answer, and not an answer in and of itself. Is my assessment true? If it is, I suggest adding an answer, and making that the focus. If it's not, I suggest making that obvious (because it's not currently). One way to add super long comments to other people's answers is to write them out on a pastebin.com paste, and put the link as your comment (should you not want to answer the question). FYI: I'm not the one who downvoted your answer. Also, make sure to be friendly and such. Commented Oct 21, 2023 at 4:37
