10

Relevant question and answer

What is the grammar point that we changed the structure "S + V and V" into the structure "S + V, Ving"?

Is an answer written by an AI chatbot, such as ChatGPT, Bard, or others, acceptable?

Subquestions:

  • Should such answers be deleted?
  • Should they be flagged? How?
  • What if the post acknowledges that the answer is generated by an AI?
  • What if the post is generated by an AI but checked for accuracy by a native speaker?
  • What if the post is a correct answer?
  • What if the question is about text produced by an AI?
  • What about self-answers?

As far as I can tell, we haven't had this discussion yet. Over on ELU there is a blanket ban on AI-generated text in questions, answers, and wikis. Do we want this here?

There has previously been a question here on meta, Is it on topic to ask about English advice given by ChatGPT?, and the community decision was "Yes, it is okay".

6
  • 2
    I think FF's answer is an example of ChatGPT being used as a resource. A native speaker such as FF can tell immediately whether the response makes sense and whether the answer is reliable. An answer by ChatGPT will not have the nuances, personality, or unique viewpoint of a human being.
    – Mari-Lou A
    Commented Apr 8, 2023 at 14:36
  • 2
    FWIW, if we reach a decision here to allow such answers, it's important to note that they have to be attributed as such.
    – Glorfindel Mod
    Commented Apr 8, 2023 at 16:10
  • 1
    What @Glorfindel said. The default position here is that when we see an answer written in bulletproof English, we tend to assume the poster knows what he's talking about. That's not the case with things like chatGPT, which is still quite capable of saying misleading things in flawless English! (But maybe not in another few months, especially given we're asking a Large Language Model to pronounce on the use of English - as opposed to watching it disgrace itself with simple arithmetic howlers! :) Commented Apr 8, 2023 at 19:10
  • 1
    That is a fundamental misunderstanding of the function of the AI. It does not know how it is able to generate grammatically correct English. Its knowledge of grammar is as flawed as its knowledge of arithmetic, or history, or any other subject. It doesn't know how its own brain works - its ability to generate text might lead you to believe that it must know something about grammar. It does not!!
    – James K
    Commented Apr 8, 2023 at 20:53
  • 1
    As an example of the issues that LLMs can cause: I asked if "big" was an adjunct in "a big person". ChatGPT told me it was not. I challenged this, and it said it was mistaken and "big" was an adjunct. I challenged it again and it said that "big" was not an adjunct. Transcript: sharetext.me/ggub3l2xnd
    – James K
    Commented Apr 16, 2023 at 18:22
  • EDITED: New blog post from our CEO Prashanth: Community is the future of AI. Feel free to cast your downvote or upvote, also in my name. The abyss between management and workers (users) is stunning. We shall see what lies ahead, but the pessimist in me says that if LLMs continue to improve, as everyone says they will, online Q&As moderated by subject-matter experts will start dwindling.
    – Mari-Lou A
    Commented Apr 18, 2023 at 9:18

8 Answers

12

We want original content, not copy-pastes.

I think maybe the best way to frame it is by asking something like "Do we want answers that just link to another forum?" or "Do we want answers that are literally just a copy-paste from Wikipedia?" If someone asks a question and I find an answer on Reddit that looks right, can I just copy-paste it here?

No, we'd prefer you didn't.

We want answers that are original content (created by you, our loyal and trusted user that we give all these medals to) and if you're pasting from any other site then you're not generating original content.

Really I don't think the rule should be "we don't want chatbot answers". The rule should be "we don't want copy-pasted answers, regardless of where you got them from". It's certainly good (and usually preferred) to cite a notable source, but a chatbot (or "some post I found on Reddit") is not a "notable source" and thus should not be copy-pasted as an answer.

1
  • 2
    I just realized that I updated my answer with a very similar sentiment having not read through the new posts. I promise I didn't crib your answer! Here's your upvote from me...
    – ColleenV
    Commented Apr 12, 2023 at 18:48
7

NO

(mostly)

Should such answers be deleted/flagged?
A post with unattributed content from a bot is plagiarism and should get flagged for immediate deletion.

The rest of my answer assumes proper attribution.

What if the post is generated by an AI but checked for accuracy by a native speaker?
Let's first take a step back to what FF actually did, which was to provide a bare quote of what ChatGPT said without any other framing context - that is to say, with no stated support for the content. This kind of answer has no place on ELL or the SO Network at large.

Any learner who wants a direct answer from ChatGPT can install the browser extension or get an OpenAI account and ask the bot themselves. The reason people continue to ask questions here is that, for better or worse, they want answers from humans. A bare quote of an answer from a chat bot is roughly on par with a bare quote from an answer on Quora or Yahoo Answers. None of those sources are at all reliable, so merely quoting what they said doesn't even qualify as an answer in my books, and should be flagged as "not an answer" and deleted.

Now, back to OP's actual question. If FF had said they'd verified and approved of the content of ChatGPT's answer, then FF would be asserting the content as their own answer, though in someone (something?) else's words. This is a value-added version of a bare quote as it comes with the explicit approval of a high-rep user with a positive history of answering questions.

I'm divided on whether an explicitly supported AI bot quote is acceptable as an answer, and leaning heavily (please, please) towards unacceptable.

I can support it in that it's roughly equivalent to saying, "The answer is 'Yes'. User123 said it perfectly on Quora:...", which we do allow here.

But I also oppose it because it may reasonably lead to a flood of answers from people who claim to have carefully read the content, but in fact have not. Finding the exact answer the user is looking for on a site like Quora requires some labour and luck. You cannot build a high-rep account by Quora-mining. You could, however, build a high-rep account by posting unverified quotes from AI bots while claiming to have verified them. Modding this behaviour would be nearly impossible. The easiest way to deal with this is to ban AI answers altogether.

What if the post is a correct answer?
This is irrelevant. We don't deal in correct or incorrect answers on this network, only upvotes and downvotes. Correct answers will generally get upvoted and incorrect ones will get downvoted. But anyone with a couple months' experience here knows that a highly upvoted answer can attract an expert who shows that it's not correct after all.

What if the question is about text produced by an AI?
Questions about AI-produced English, or AI-produced statements about English, are just as valid as questions about statements made by native speakers, English teachers, or fellow English students, all of which are welcome here. In other words,

ChatGPT says X. Is that good grammar/true?

is as valid a question as

My friend says X. Is that good grammar/true?

Not asked, but I'm answering it: What if the language of the question itself is generated by an AI bot?
While I wouldn't delete a question if the asker used AI to help them phrase it, I would strongly discourage it because a learner couldn't be sure it says what they think it says, and might assume that because a bot made it, it's perfect.

The trouble I see is people gaming the system by telling ChatGPT things like, "Write a hundred questions typical of ELL" and posting a few every day and building rep. Even with attribution, this violates the intent of the site, which is to help learners, not to generate question content. Since there's no way to determine whether a user is of this type or of the type in the paragraph above, I think we need to ban all questions written by AI.

Overall
By banning all questions and answers generated by AI, we're not depriving anyone of anything, we're keeping closer to the purpose of this site, and we're making our lives easier, so I support a blanket rule of no questions or answers generated by AI. (This wording fully allows for questions about content generated by AI)

13
  • I've added another "edge case" to the list in the question: self-answers, in particular when an OP has had a question answered by a series of comments, and writes an answer which summarises those comments using an AI to polish the grammar and style.
    – James K
    Commented Apr 10, 2023 at 8:41
  • @JamesK Thanks for the update. I hope my position on self-answers is captured by my position on questions and answers generally: NO.
    – gotube Mod
    Commented Apr 10, 2023 at 14:19
  • What if I asked my husband and paraphrased his answer? Is that plagiarism or my own work? Almost everything I know I learned from somewhere else. What if I asked ChatGPT, found an old forum post and read a book then summarized all that in an answer? The underlying problem here is not AI. It's just exacerbating a problem we already have by making it easier to post low quality but authoritative-sounding answers.
    – ColleenV
    Commented Apr 11, 2023 at 13:25
  • @ColleenV If it's your words --even if it's someone else's idea-- then it's not plagiarism.
    – gotube Mod
    Commented Apr 11, 2023 at 16:31
  • And what if my words happen to closely match the words of the source I learned something from, because that is the most effective way to express that knowledge? What if I plagiarize something but rephrase a couple of things so it's more difficult to discern? I wonder if AI could tell us who I was probably plagiarizing ;) My point is this is more appropriate as a judgment call, not a blanket ban. You can't really tell over the Internet what words seem to have originated from someone's mind because we all incorporate things we learned from others to some degree.
    – ColleenV
    Commented Apr 11, 2023 at 17:04
  • 2
    Admittedly, it's not in the actual text of my Answer that triggered this debate, but in context it's quite obvious that I did in fact "verify and approve of the content of ChatGPT's answer". My primary reason for posting at all was by way of taking issue with someone else's dismissal of James' (the OP here) Answer on the grounds that it didn't cite any recognized authorities. But I never meant to imply ChatGPT was axiomatically authoritative - I posted more than one comment explicitly pointing out that I endorsed what ChatGPT said to the hilt... Commented Apr 11, 2023 at 17:50
  • 2
    ...because although it was essentially saying exactly the same thing as James' answer and my own "answer-as-comment", I thought the way it phrased things was better than either of our human contributions. James doesn't like it on principle, but I can't see the problem so long as anyone who posts text generated by an AI is prepared to stake their own reputation on its phrasing and accuracy. If it weren't for all the furore generated by this Meta post, I'd go back and explicitly add my endorsement to the Answer text, but in the circumstances that smells like "moving the goalposts". Commented Apr 11, 2023 at 17:54
  • @ColleenV Using chat bots and plagiarism are two separate topics. I'm saying both plagiarism and using AI bots (with citation like FF did) violate the spirit of the site and should not be allowed. We have a blanket ban against plagiarism, and it's a judgement call each time we enforce it. IMO, the same should apply to using chat bots.
    – gotube Mod
    Commented Apr 11, 2023 at 20:42
  • 1
    @FumbleFingers As I alluded somewhat to in my answer, while I trust some people to read and fully understand what a bot says before posting it under their own banner, I don't trust the general population here to do so. Some would read it and verify that it sounds true-ish, but not fully understand all of it. Others will have no regard for whether it's correct or helps their reputation among other users, and just hope it earns them rep points.
    – gotube Mod
    Commented Apr 11, 2023 at 21:19
  • Just in from Meta: “We’ve got a dedicated team working on adding GenAI to Stack Overflow and Stack Overflow for Teams and will have some exciting news to share this summer.” Currently the announcement stands at -2; I predict many more downvotes. Yet ChatGPT is a tool that will not vanish into thin air just because it is banned on Stack Exchange - for how long, it remains to be seen.
    – Mari-Lou A
    Commented Apr 17, 2023 at 18:19
  • Oops, 12 DV 9 UV. “As the AI landscape continues to evolve, the need for communities that can nurture, inform, and challenge these technologies becomes paramount. These platforms will not only offer the necessary guidance to refine AI algorithms and models but also serve as a space for healthy debate and exchange of ideas, fostering the spirit of innovation and pushing the boundaries of what AI can accomplish.” What's the expression for that kind of fake, starry-eyed talk filled with buzzwords? Anyway, banning ChatGPT answers, whether verified or only suspected, will eventually lead to an exodus.
    – Mari-Lou A
    Commented Apr 17, 2023 at 18:37
  • @gotube The test of plagiarism depends on context. If you have a PhD and tenure and simply reword a colleague's work and pass it off as your own, you will still be considered a plagiarist. The lower bar for plagiarism of "as long as it's in your own words" only applies to undergraduate essays in my experience, where the entire objective of the student is really a form of academically endorsed plagiarism from multiple authors we call "writing 2000 words of opinion based entirely on secondary sources".
    – fred2
    Commented Sep 21, 2023 at 2:18
  • @fred2 We have a lower bar for plagiarism than undergraduate essays, so I think we're good ;)
    – gotube Mod
    Commented Sep 21, 2023 at 22:40
5

No

Answers should be primarily written by humans.

Answers that appear to be written by AI should be deleted. I'm unsure if "Very low quality" or "In need of mod attention" is the right way to flag such answers.

They should still be deleted even if there is no attempt to hide the fact that they were written by an AI.

They should still be deleted if the answer has been checked by a human, and they should still be deleted if the answer is correct.

Moreover, an AI cannot be considered a source for a claim about English Learning. You cannot use a quote from an AI as evidence to back up your claim about a point of English grammar or the meaning of a word.

AIs do have a role. Questions about the text produced by AIs are on topic. Questions which include, as part of the prior research, the advice given by an AI are on topic. And learners may use AI to draft a question in good English. Thus we should be more permissive of learners using AIs to ask questions than we are of their use in answers. In this way our policy should be subtly different to that of ELU.

Why

AIs like ChatGPT can produce convincing, clear text. They can help learners by always being ready to chat. Their English is grammatically correct (at least, it is more accurate than that of most native speakers). But they don't actually know anything about grammar!

They are, however, always willing to answer a question even if they don't know the answer. And they never know the answer, because they don't actually know anything.

There is a role for AIs in helping learners to learn. But there isn't a role for them in answering questions. If one wants an answer from an AI, it is easy to get one. The fact that a question is being asked here means that the OP wants a human answer.

Using AI to write answers that are upvoted means that you're getting credit for something you might have zero knowledge of, which flies in the face of the Stack Exchange model.

This isn't about the quality of the answers per se. If a native speaker checks and confirms that the answer is reasonable that doesn't make an AI an appropriate tool here.

A blanket ban is fairer and easier to understand, and therefore enforce. It would be ridiculous to say, "Only native speakers are permitted to use Chatbots to write answers".

11
  • 2
    Can I upvote this with a percentage? Let's say 75%.
    – Mari-Lou A
    Commented Apr 8, 2023 at 14:43
  • There are plenty of non-native speakers here on ELL who are better qualified than me to "verify" AI-generated text in many respects. For example, they might know more about whether an AI's terminology accurately reflected that of recognized authorities, or they might be more aware of national differences. And certainly if Mari-Lou posted an answer making extensive use of AI-generated text, I would put a lot of faith in it. Native Anglophones are usually pretty good at "idiomacy", but that's just one part of learning a foreign language. NNS know other potentially useful things. Commented Apr 8, 2023 at 15:07
  • ...anyway, at least that's caused me to look again at my own Answer and revise it. Obviously I wasn't thinking straight when I said only native Anglophones would be qualified to curate AI-generated text. Commented Apr 8, 2023 at 15:27
  • 1
    @Mari-LouA What changes would you make to get to 100%
    – James K
    Commented Apr 8, 2023 at 18:47
  • This is a complicated issue. I knew of chatGPT when I posted against the earlier ELL Meta question (Is it on topic to ask about English advice given by ChatGPT?), but I'd never interacted with it. Now, it's my first port of call when I want a cogent summary of any movie more than 2-3 years old. But I can't go along with "learners may use AI to draft a question in good English" as a general principle. Many times over the years I've been criticized for correcting obvious errors in ELL Questions on the grounds that those mistakes give useful clues about the OP's competence... Commented Apr 8, 2023 at 18:57
  • 1
    ...which by and large I agree with. But nns using AI to "prettify" their answer text sounds much more fraught with danger! If superficially the text looks flawless, we'll be lulled into trusting the content as well as the phrasing, and it stands to reason we can't trust the average nns to reliably detect if the AI has (subtly or not-so-subtly) changed the meaning of the poster's intended Answer. Commented Apr 8, 2023 at 19:01
  • 3
    I disagree with this bit: “They should still be deleted if the answer has been checked by a human, and they should still be deleted if the answer is correct.“ If the answer is checked by a user, if it's correct, and it's helpful, then why should it be deleted? As long as the source is attributed and the answer includes supporting evidence, leave it up to the community to either upvote or downvote.
    – Mari-Lou A
    Commented Apr 8, 2023 at 20:08
  • I'm not willing to budge on that bit. I think the idea that some "elite" of native speakers should sit and just peddle out chatGPT answers by simply saying "I've checked it" goes against my central beliefs about what ELL is for. This is, perhaps, the most important principle for me.
    – James K
    Commented Apr 8, 2023 at 20:45
  • This is a sensible answer, though the suggestion that we allow AI to proofread questions is ironic given that we (the human editors of ELL) aren't supposed to be proofreading questions. Anyway, "In need of moderator intervention" is the best flag to use, especially if you think it was AI written but no attribution is given (which is plagiarism to boot). The thing that I think about is that we've already seen spammers use unattributed AI (one got almost to 50 rep!) and I don't want to give them any more opportunities.
    – Laurel Mod
    Commented Apr 9, 2023 at 3:04
  • @JamesK IIUC, you're banning even if it's correct, and even if the person checking finds authoritative references to back it up (something a knowledgeable non-native speaker could also do). You're also banning in a way which human-written "from my experience as a native speaker ..." answers aren't. Yes, being knowledgeable enough to know if the answer is correct is a certain type of "eliteness", but banning AI answers doesn't get rid of that. Why is an answer "acceptable" if it comes from a foggy remembrance of Mrs. Smith's 7th grade English class, but irredeemably tainted if it touched AI?
    – R.M.
    Commented Apr 11, 2023 at 20:50
  • @R.M. Yes, that is a practical matter. If you know it to be correct and you have authoritative references to back it up - why do you need an AI? The trouble is that I find FumbleFingers' answer "I put your question to ChatGPT and this is what it said '...'" to be unacceptable (for the reasons above). I want to ban that type of zero-effort answer. I want to discuss whether that kind of answer should be banned, and then the best way to do that.
    – James K
    Commented Apr 11, 2023 at 21:06
3

It depends on the answer.

AI is a tool, and using it doesn't inherently make an answer bad either in content or form. We already have mechanisms in place to handle poor quality and plagiarized answers.

Do we have to note when a grammar checker was used on our answer text? Why not? Grammar checkers are a more primitive version of natural language AI.

If we allow people to reference webpages without further checking the source of the text from that website, we are already allowing AI-generated text in answers--it's just been laundered by posting it somewhere else.

I think we should discourage AI answers because of the quality issues that abusing the tool causes. I think banning the use of any tool is a waste of focus and energy. Why shouldn’t I be able to use AI to help me generate example sentences to illustrate a particular usage in my answer?

On Stack Exchange answers ideally can be judged entirely on their content, not by who wrote them or what tools they used to write them. We ask people to cite their sources not just to avoid plagiarism, but also to help people judge credibility.

We should have more confidence in the system. If it can't handle an influx of poor quality answers, we should make the system more robust, not make essentially unenforceable rules.

We don't need to ban AI. Maybe we just need to clarify in How do I write a good answer? that answers need to consist of the author's own work and not be copypasta from other sources no matter how accurately it is attributed.

6
  • 5
    I'm opposed to AI-generated answers. If an OP wants an answer from a bot, they could simply pose their question to a bot instead of posting it here. We don't need ELL members acting as middlemen between ELL and ChatGPT. That said, I wouldn't mind so much if someone used an excerpt from a ChatGPT answer and then elaborated on what was said with some valuable insight from a human. If you can't compose a decent answer from your own knowledge, research, and synthesis, then don't bother posting a bot's answer here among what SE likes to call a "community of experts."
    – J.R. Mod
    Commented Apr 11, 2023 at 19:13
  • 1
    I don't see what's essentially unenforceable about it. Attributed AI is trivial to identify (obviously). Unattributed AI is harder to identify, but it's non-negotiable that it's not allowed — that's plagiarism. I haven't seen any "laundered AI" yet and it may not become a problem for many reasons. Still, we have methods to identify AI even when it's not attributed. And to be clear, your traditional grammar checker (Grammarly or what have you) is another beast: it doesn't do the thinking for you, nor is anyone suggesting that it should be banned/discouraged in any way.
    – Laurel Mod
    Commented Apr 12, 2023 at 3:12
  • @Laurel You haven't seen any laundered AI content because it's difficult to detect. The methods of detecting AI content SE has access to are about as effective as the methods used to detect sock puppets, which is not very effective except against the least skilled or laziest puppeteers. ChatGPT doesn't do any thinking; it interprets natural language prompts and generates relevant grammatical text. It can, just like a grammar checker, be used to improve writing. Answers that aren't your own work are already discouraged. There's no need to explicitly outlaw AI. ell.meta.stackexchange.com/q/1344/9161
    – ColleenV
    Commented Apr 12, 2023 at 17:07
  • @J.R. I agree with you that answers should include some insight from the author and not just quotes from other sources (even if properly attributed). I don't think that has changed just because there's a tool that makes it easier than ever to generate gibberish that looks like a credible answer. As of right now, people can't copyright entirely computer-generated content, which means that posting it would probably violate the TOS, although I'm not a lawyer and there's a pending challenge to that ruling. We should enforce the rules we have, not keep piling on new ones.
    – ColleenV
    Commented Apr 12, 2023 at 17:17
  • @ColleenV Could you please address J.R.'s issue that allowing chat bot content would encourage users to be middlemen between chat bots and ELL? Leaving the enforcement argument aside, how does it serve our community to allow it?
    – gotube Mod
    Commented Apr 18, 2023 at 22:32
  • 2
    @gotube I am not arguing that we should allow users to copy and paste content from a chat bot as an answer. I'm arguing that a blanket ban is unnecessary and will likely stifle appropriate usage of something like ChatGPT. The example on Meta where SE is investigating an AI tool to help users write better titles for their questions is an example of an appropriate use for the tool. I upvoted JR's comment and agree with it in spirit. I think banning all AI is both futile and shortsighted. We already have rules and guidelines against copypasta. Maybe they need to be more explicit.
    – ColleenV
    Commented Apr 19, 2023 at 19:10
0

No

The site is about learning English. While there is certainly no prerequisite that answers must come from native speakers, it is specified on the homepage that questions must be 'practical', and therefore answers must also have authority on practical use. This can only come from native speakers or from those who have learned English to a sufficient degree to be able to answer authoritatively, having put what they learned into practical use. Artificial intelligence cannot fall into that group yet, and possibly never will.

AI draws from information found on the internet. Even with advances in machine learning and people saying how 'eerily human' it now feels, it is still just drawing from what other people have written. It may be able to 'learn' what is correct and incorrect, but it isn't able to discern it. My own experience with ChatGPT has shown me that it will give me incorrect information, then 'apologise' and admit it was wrong when I challenged it. If it uses unidiomatic language, it cannot see the other party's response in real time - it only gets a calculated response from the user. This is not the same as putting what you have learned about a language into practice.

In fact, AI couldn't even be said to be a 'native' speaker of any language. A native speaker is someone who has a first language learned socially. Native speakers are sometimes said to 'think' in their language, even though they may speak others. A computer does not 'think' in English. If you asked it the same question in two different languages, it might give you the same answer run through a translator. Some AI algorithms may just select texts in the language you speak to it in order to collate a response, meaning you could get conflicting answers depending on how you posed the question.

There are so many different opinions about AI at the moment. Those who are most impressed (or scared) by what AI can suddenly do tend to be people who don't know anything about it and are worried about how it may affect their livelihoods as artists, writers, teachers, etc. Those who actually work with AI - which, as a data analyst, I do to an extent, as it is one of the tools available to me - tend to have a different opinion: that it isn't that sophisticated or remarkable, is nowhere near the degree of 'sentience' that some are suggesting, and certainly isn't capable of replacing human responses.

0

Yes, but ...

To me, as a non-native English speaker, an AI chatbot is useless because I don't know whether its answer is correct or not. But if some native speaker makes use of it, then I will know the answer has been checked by him personally - that is, he, as a native speaker, confirms the correctness of the AI chatbot's answer and, therefore, I can trust it.

But it's very important to note that the AI's answers are often very vague. So they usually have little value.

-1

No.

AI is inherently plagiarism from better-qualified authors who deserve credit for their work, and who do not deserve to have the fruits of their work endlessly recycled, without attribution, without any request for permission, and without any offer of payment in return for the vast financial gain accrued from the use and abuse of their intellectual property.

Nobody denies that AI depends on source material to create a facsimile that appears to be genuine. It cannot create anything. It has no ideas, no understanding and no actual intelligence. Even the AI art we all first saw a few months ago is not art, even if no artwork on earth looks exactly like it. It is infinite varieties of plagiarism and pastiche of other people's original work, stirred up and spat out as "original" art. But it cannot exist without the unsourced, unacknowledged, unpaid source artists whose work is fed into the AI meat grinder as the one indispensable ingredient in the unhealthy AI burger.

Nobody claims that AI can generate original thought. Yet. If all AI content is recycled versions of human creativity, it is a) work based on the unpaid labour of millions of humans and b) nothing other than an unsourced and unverified pastiche of actual human intellect, originality and creativity. It is not possible for it to be "better" than the human sources it is based on. But it can be worse.

Ban it, not just because it sucks, but because it is an act of theft by large corporations sold back to the authors as a "free tool".

-3

Yes

I'm the user who posted a response from chatGPT as an answer in the question linked to by OP here.

Before doing so, I had already posted a relevant "answer / comment" myself, and carefully read the Answer posted by the current OP to that question.

I don't think it should matter that the actual text of my answer came from chatGPT (apart from the initial disclaimer pointing out the source, which I think we should insist on if such answers are to be permitted). The point is I think everything chatGPT says there is accurate and well-phrased - and probably more elegantly expressed than what I might have written myself.


EDIT - I think it's important to note that I'm only endorsing AI-generated Answers (that have been "curated / verified / endorsed" by competent native speakers OR "advanced" non-native Anglophones). I don't think it's a good idea to allow Questions primarily or solely focused on the idiomatic validity or factual accuracy of some AI-generated text.

4
  • Questions based on AI text are fine - the source of the text is fairly irrelevant. The problem is with people basically copy-pasting the question into ChatGPT and then copy-pasting the answer. What is the point? No, as @FumbleFingers said, "ChatGPT isn't just something to be 'minimized' - it has no place here!" That is especially true of answers.
    – James K
    Commented Apr 8, 2023 at 18:47
  • 1
    Quoting a non-expert source like ChatGPT is comparable to quoting a Quora answer without any other frame. See my answer for more detail.
    – gotube Mod
    Commented Apr 9, 2023 at 17:05
  • 3
    @gotube what if the Quora/ChatGPT answer is accompanied by other supporting evidence? What if the ELL user has more than 5K rep and has posted more answers than questions during their participation? If the answer is correctly attributed and includes a final assessment, who is going to judge that it must be deleted because it contains ChatGPT elements? If that happens, what will happen next? And I'm seeing it happen here on ELL: unattributed answers, cleverly reworded and phrased so as to appear natural and idiomatic. Probably the user instructed the AI to sound chatty and informal.
    – Mari-Lou A
    Commented Apr 11, 2023 at 17:25
  • @Mari-LouA My opinion has evolved in the last 48 hours. I would now say that if someone quotes ChatGPT --even with mounds of supporting evidence-- I consider that one ChatGPT quote as equivalent to quoting a keyboard monkey, so it should not be considered as having any value. I'll edit my answer above to reflect this when I'm on a more comfortable keyboard.
    – gotube Mod
    Commented Apr 11, 2023 at 21:03

