13 events
when toggle format what by license comment
Jan 4, 2023 at 0:13 comment added uhoh So perhaps a meta question like "Are 'Chatterbot says X, is it true?' questions OK here?" better defines my concern, and an answer like "Well, if they don't get out of hand in terms of a daily rate and we can edit them and successfully discourage the OP from asking a dozen more like it, yes. And if they do get out of hand, then we can change to no later." would work for most people including me.
Jan 3, 2023 at 23:58 comment added uhoh I'm sure we agree on that quite nicely, but since chatterbots can generate confusion almost algorithmically at kHz rates, I am concerned. Because it takes only seconds to ask a chatterbot a one-sentence question, then post the output in an SE question followed by "Is this true?", I think that kind of question needs an a priori consensus that it's insufficient. Otherwise it can quickly get out of hand. Thus I've asked here for some agreement about a temporary ban. To quote Melissa McCarthy (and make fun of myself) "The ban is not a ban."
Jan 3, 2023 at 12:52 comment added Dragongeek @uhoh at the end of the day, the site is here to answer questions and clear up confusion, and I think that while people shouldn't be using chatGPT to educate themselves, being confused about something and coming to Space.SE for an explanation is a perfectly valid reason and something that we want to encourage.
Jan 3, 2023 at 12:49 comment added Dragongeek @uhoh I don't see why you can't edit the question in that specific case--I would classify it under "needs improvement" regardless of policy on chatGPT. Changing the question to be "I have heard conflicting explanations that [X] and would like to know exactly how thrust is produced in ion thrusters" is a perfectly fine question and, although it shows a lack of own-effort research, I don't think chatGPT is the issue here. I would recommend a similar edit to any other question where paragraphs of extra and superfluous background were given, regardless of data origin.
Jan 3, 2023 at 11:37 comment added uhoh In this recent question the OP begins "I have read many times that..." but the only source is a pair of conflicting chatterbot outputs. I'd like to be able to edit the question and delete the "chatterbot sez..." and link to our policy as an explanation.
Jan 3, 2023 at 11:25 comment added uhoh Yes, there are folks who will likely argue that chatterbot output is a sufficiently good source upon which to base a "chatterbot sez.." question, since it at least derives (in some way) from material that includes facts. I think it is important to at least temporarily ban it and make it clear that chatterbot output is worse than monetized weird pseudoscience YouTube channels and should not be the basis of questions. I guess I feel that needs to be explicit, we can't just assume everyone will automatically view it the same. We're in an era of norms and precedent violation; explicit is good.
Jan 3, 2023 at 10:27 comment added Dragongeek @uhoh Is chatbots being sources even a discussion? They aren't cite-able, accountable, or even (currently) legally capable of creating intellectual property. I don't think there's a real argument here for saying they are.
Jan 3, 2023 at 10:23 comment added uhoh That sounds workable only if one then spells out that chatterbots don't count as sources, which brings us back to the simpler temporary ban on chatterbot content.
Jan 3, 2023 at 10:21 comment added Dragongeek @uhoh Rules in general are much easier to implement than to get rid of, so I want to avoid "shooting from the hip" when it comes to creating new rules that will probably stay. If we do want to create a new rule that aims to curb chatGPT, I would advise something like "Answers must cite sources" as this rule would not only filter out chatGPT answers but also improve the user experience by raising the quality of answers.
Jan 3, 2023 at 10:19 comment added Dragongeek @uhoh When new rules are sufficiently necessary, sure, implementing them should be carefully considered, but I don't think this is the case here. Adhering to present moderation guidelines and site/answer rules, moderators already have no issue "legitimately" deleting chatGPT posts, and I don't see how making an explicit "no chatGPT" rule would make the job of moderators easier or improve the user experience.
Jan 3, 2023 at 10:06 comment added uhoh Those can be 100% mindlessly, algorithmically generated and posted at an arbitrary rate. This is different, new. Shouldn't rules and restrictions adapt as technology develops?
Jan 3, 2023 at 10:04 comment added uhoh this might be related to this recent question which purports to use chatbot generated material as prior research, or a semi-authoritative source. I was careful to ask this question about "chatbot generated content" to include both question and answer posts. About "more rules and restrictions", what are the current rules and restrictions that address the potential slew of new questions of the form "ChatBot says 2+2=5 can be the basis of a new religion/mathematical formalism/science fiction story. Is that true?"
Jan 3, 2023 at 9:46 history answered Dragongeek CC BY-SA 4.0