23

Posting GPT answers on Law Stack Exchange is temporarily banned.

This policy has been adopted from Stack Overflow's current stance to give us time to work out our own. Most of the arguments for and against the use of AI-generated content for coding questions apply equally to legal questions.


The Stack Overflow reasons are quoted below and, yes, we know there is not a one-to-one correspondence between their experience and ours. For example, while we are not (yet) receiving the volume of AI answers that Stack Overflow is, we have had one user account all of whose answers are believed to have been AI-generated.

Please see the Help Center article: Why posting GPT and ChatGPT generated answers is not currently acceptable

This is a temporary policy intended to slow down the influx of answers and other content created with ChatGPT. What the final policy will be regarding the use of this and other similar tools is something that will need to be discussed with Stack Overflow staff and, quite likely, here on Meta Law Stack Exchange.

Overall, because the average rate of getting correct answers from ChatGPT is too low, the posting of answers created by ChatGPT is substantially harmful to the site and to users who are asking and looking for correct answers.

The primary problem is that while the answers which ChatGPT produces have a high rate of being incorrect, they typically look like they might be good and the answers are very easy to produce. There are also many people trying out ChatGPT to create answers, without the expertise or willingness to verify that the answer is correct prior to posting. Because such answers are so easy to produce, a large number of people are posting a lot of answers. The volume of these answers (thousands) and the fact that the answers often require a detailed read by someone with at least some subject matter expertise in order to determine that the answer is actually bad has effectively swamped our volunteer-based quality curation infrastructure.

As such, we need to reduce the volume of these posts and we need to be able to deal with the ones which are posted quickly, which means dealing with users, rather than individual posts.

So, for now, the use of ChatGPT to create posts here on Stack Overflow is not permitted. If a user is believed to have used ChatGPT after the posting of this temporary policy, sanctions will be imposed to prevent them from continuing to post such content, even if the posts would otherwise be acceptable.

NOTE: While the above text focuses on answers, because that's where we're experiencing the largest volume of such content, the ban applies to all content on Stack Overflow, except each user's profile content (e.g. your "About me" text).


We have to decide the policy for our site for ourselves

At present, the Stack Exchange policy is to allow each site to craft its own response to ChatGPT and other AI Q&A.

This is the current word from on high (in response to a discussion titled Ban ChatGPT network-wide):

With due consideration, we've decided no general policy is necessary or helpful at this time. I want to be clear: I am not in any way intending to downplay the significance of ChatGPT, nor the disruption it has caused to the platform over the last few weeks.

Instead, we're going to stand by the comment I left on this post on December 5th:

While we evaluate, we hope that folks on network sites feel comfortable establishing per-site policies responsive to their communities’ needs.

Each site on the network is going to be impacted by ChatGPT (and its future iterations) in different ways. Of all the sites on the network, Stack Overflow was hit by far the hardest. However, we are measuring its impact both on Stack Overflow and across the network -- and, the impact of ChatGPT is currently diminishing everywhere. Some sites will see more or less activity on a given day, but outside Stack Overflow, it appears to be leveling off to a very slow trickle. On Stack Overflow, its usage rate is still falling quickly.

Because sites are impacted to such different degrees by the usage of ChatGPT, we encourage sites to create these policies as they become an issue. A blanket policy does no good if affected communities are not simultaneously developing the methods they use to combat the material problems they face. Instead, it risks being actively unproductive, by setting an expectation that sites will purge this content without giving them targeted tools to do so.

Our work internally progresses on identifying these posts and making our systems more resilient to issues like this in the future. We recognize that this is a shot across the bow, and the problem isn't going to go away in the long term. But for now, it seems we've weathered this storm mostly intact. As always, we'll reevaluate this decision in the future, if the circumstances warrant it.

And, of course, if any site experiences a volume of GPT posts that are cumbersome to manage, or a site needs any other support managing an influx of unwanted content, we are always happy to help apply the tools we have at our disposal.

This is a list of how other sites are responding.

7 Answers

24

This should be Law.SE's permanent policy. Posting a ChatGPT response (verbatim) as an "answer" is a lot like pasting a search engine result, but worse, because the origin is obfuscated by (a) the wordy formula of ChatGPT responses and (b) the absence of a source.

If people want a search engine result they can use a search engine (or LMGTFY), and if they want a GPT answer they can go directly to a GPT engine.

This policy does not prohibit using ChatGPT to find or formulate an answer. It merely requires a user to review and edit the answer enough that it can't be readily identified as a ChatGPT response. This is analogous to our policy against link-only answers.

  • I think this should be revised as ChatGPT or other LLMs improve. These tools are going to take the jobs of a lot of lawyers, and in the future will be more qualified to evaluate legal issues than most of them. Long term, only the most qualified lawyers will be competitive with an LLM.
    – ZeroPhase
    Commented Apr 21 at 9:54
9

This needs to be broadened a tad: not just ChatGPT needs to be banned, but ANY AI-generated answer. If we only ban one AI, others might flock to the breach. And as we know, lawyers have thrown AI out of court and banned its use in any form, and so should we.

7

Some thoughts:

  1. It isn't always easy to know what is and isn't ChatGPT, so distinguishing it may be a practical issue.

  2. The main problem with ChatGPT is that it is optimized to be coherent and flow logically whether or not it is true. ChatGPT can often produce an answer that sounds right but is blatantly incorrect, or more subtly, is in a gray area and fails to identify the uncertainty present since it is prone to advocate for a position.

  3. This said, ChatGPT is not infrequently as correct as many of our less expert contributors and is often correct enough to be on the right track or to reference the right concepts.

5
  • Slippery slope if you start issuing bans for being 'prone to advocacy'. ;)
    – richardb
    Commented Feb 28, 2023 at 10:21
  • Given that SE's stated "overall" reason for banning ChatGPT is that "[the] rate of getting correct answers from ChatGPT is too low", wherever ChatGPT "is often correct enough to be on the right track" the ban would have no reason for being. Have you (or has anyone) identified on LawSE some actual instance of ChatGPT being used and being [rather] accurate? Commented Feb 28, 2023 at 15:30
  • LOL. Given the many absolutely garbage answers I saw over many years on SE (not on Law.SE but overall - even on sites that aren't prone to opinions like SO itself), using "ChatGPT can be incorrect" as a reason seems... wrong somehow.
    – user0306
    Commented Apr 6, 2023 at 19:17
  • @richardb, for an example of ChatGPT "being prone to advocate", see document 46-1 of Mata v. Avianca. A lawyer asked ChatGPT to argue for a position, and ChatGPT proceeded to provide legal-sounding arguments in favor of the position, despite there not being any actual law or precedent supporting that position.
    – Mark
    Commented Jun 30, 2023 at 23:15
  • Related law.meta.stackexchange.com/questions/1775/…
    – ohwilleke
    Commented Jul 24, 2023 at 0:04
4

This is an interesting decision, and I wish we had more information about who reached it and how.

As we move toward a permanent policy, I hope we'll be focused on broadly applicable principles.

Do we really want to ban content from a source simply because "the average rate of getting correct answers ... is too low"? I could offer a long list of users -- many prolific, some mods -- who fit that description. Do we intend to ban them, as well? If not, why the disparate treatment?

And who is making the determination that a problem exists to begin with? On Stack Overflow, anyone can run the code and see if it works, but what about on law.SE? It's not at all clear to me that the gatekeepers here have the subject-matter expertise to reliably assess the quality of ChatGPT's answers.

  • "the posting of [incorrect] answers created by ChatGPT is substantially harmful to the site" as it "has effectively swamped our volunteer-based quality curation infrastructure" - so the problem is your volunteer-based quality curation infrastructure. Raise unrecorded votes to 1K, not 125 or whatever it is, because idiots (especially from the HNQ) going "oh, that was funny what you said there, plus one!" was the original problem being ignored, and why we find ourselves defeated by a computer. +1.
    – Mazura
    Commented Mar 18, 2023 at 21:37
  • SE doesn't give a flying fig about garbage content being upvoted from HNQ. I raised the issue - with an EASY and actionable solution - years ago on main Meta.
    – user0306
    Commented Apr 6, 2023 at 19:19
4

I think the policy as stated in the headline to this thread is rather too broad.

I think that using the output of ChatGPT, or indeed any similar AI, as an answer, without significantly revising it, and in particular, without clearly stating that it comes from ChatGPT, falls under the long-established policy against plagiarism, and should be banned on that basis.

I think that one who takes an output from ChatGPT, checks it, and provides supporting sources, and clearly indicates the process that was used, including the origin of the answer with ChatGPT, should be allowed to post that, provided that it is clear that any such poster takes responsibility for the accuracy of such an answer.

But what I think is a more useful case, and one which I have seen on another SE site, is this. A would-be poster P submits a question to ChatGPT, one that would be on-topic here. P gets an answer. P thinks that the answer is incorrect, but is unsure. P posts a new question here, saying:

ChatGPT says that in circumstance A the law is B. I think that may be incorrect, for reasons C, but I am unsure. What is the law really in case A?

I think that posts of that sort should be allowed, even encouraged. And beyond that, the use of ChatGPT (and similar AIs) to formulate questions should not be banned, provided that the use of ChatGPT is properly disclosed.

1
  1. It is not easy to identify GPT-generated answers, but I think the current upvote/downvote system should be enough to differentiate helpful answers from unhelpful ones.

  2. Users who use AI-assisted writing should clarify the usage.

  3. All substantial claims should be accompanied by a source.

  4. Not only GPT-generated answers, but also GPT-generated questions and comments, should be banned.

  • Points 2 and 4 seem contradictory.
    – user35069
    Commented Jul 4, 2023 at 13:56
  • @Rick AI-assisted and GPT-generated are not entirely the same, though.
    – dodo
    Commented Jul 4, 2023 at 16:06
-2

Although well intentioned, a ban is of highly doubtful effectiveness. More importantly, the ban overprotects those who are too lazy to think for themselves.

The masses take the consensus or official narrative at face value, no matter how inaccurate it is. Only a few people truly realize how that habit perpetuates the worst evils of our civilization. The masses' blind reliance on a consensus is oftentimes reflected even in SE's voting system. Indeed, many people's voting is driven by the trend of votes they notice on a post or by the author's reputation score, not by the accuracy and quality of the contribution. Occasionally mods themselves have pointed out how people tend to vote for "the easy stuff".

Especially in the [Mis-]Information Age, adults are responsible for being judicious about information to which they are exposed. When the topic of a post is not trivial some of us provide sources in our answers to facilitate corroboration. We try to make our answers intelligible to people with little or no background. But bans like the one at issue are a poor substitute for people's personal duty to exercise their discernment.

Also, it is unclear to me how far-fetched this scenario is: assume Oliver Wendell Holmes comes back to life (yes, this premise is utterly far-fetched) and becomes a prolific contributor on LawSE. An AI system gets trained on those contributions, internalizes Holmes's writing style, and creates output that is indistinguishable from writings by Holmes. Are Holmes's contributions at risk of getting banned for their resemblance to the AI output?

I would never delegate to AI the tasks which, by performing them myself, keep me intellectually in shape, so to speak. Therefore, I take no strong position on the matter. But an SE ban on anything that looks like AI output will sooner or later have SE playing catch-up, because the anti-detection features of these algorithms just keep improving.
