  • 1
    Upvoting for your second point; I think the wiki really needs to at least briefly touch on the use of AI editing tools. I don't think it should necessarily go as far as specifying they're allowed or disallowed as much as it should be something like "use caution, the output is your responsibility to verify".
    – zcoop98
    Commented Feb 8 at 0:24
  • 3
    One of the issues with AI editing and translation tools is that they are not merely fixing grammar or translating; they are rewriting content in a way that follows general AI heuristics. That makes it harder to tell a post written by a person and then run through such a tool apart from one that was generated entirely. It is also a frequent excuse from people who were clearly using AI to generate the whole post (given other visible heuristics), claiming they only used it for translation. As long as we have people running rampant with posting AI, I would prefer that those tools be banned, too.
    Commented Feb 8 at 7:27
  • 1
    @ResistanceIsFutile Which tools are you using? That has not been my experience, either with tools specifically designed for editing or with tools like ChatGPT and Bard used for translation and editing tasks.
    Commented Feb 8 at 11:03
  • @ThomasOwens If you instruct AI to merely translate and fix grammatical errors, it may follow your instructions, and you can usually get what you asked for without it being too intrusive. Even then it may be recognizable as AI. The main problem, however, is that people commonly ask for their text to be improved, which results in rephrasing that leaks far more AI traits. There are also tools like Grammarly (which now uses AI all over the place) that do more than simple fixes.
    Commented Feb 8 at 11:24
  • Now, for you and me, using any of those tools with care might not be a problem, as we can recognize the problematic parts. But for users who really need them because of poor English or general writing skills, saying that such tools are allowed might get them in trouble, since there is a high chance that their posts will be recognized as AI.
    Commented Feb 8 at 11:27
  • 1
    @ResistanceIsFutile I agree that there are risks if someone uses these tools, so saying that they are outright allowed doesn't make sense; that's not what I'm asking for. I just think we need clarity on integrating AI-generated content into human-generated content, and on making those lines a little clearer. Software Engineering has prohibited generative AI content, but paragraphs 2 and 3 both represent real issues that show how AI content can fit into communities.
    Commented Feb 8 at 11:42
  • I agree that we need clarity on what is allowed and when. In some circumstances and on some sites, you would actually need to post part of some AI-generated content as an example, or as the output and results you are getting; obviously such usage should be allowed. But quoting AI in the sense that the person answering used AI to directly write parts of the answer we would otherwise expect to be written by a person, even if that AI is quoted, is definitely not something that sits right with me.
    Commented Feb 8 at 14:16
  • 1
    It seems Grammarly isn't just tweaking text nowadays.
    Commented Feb 11 at 2:44
  • @RebeccaJ.Stones Interesting. I knew they had AI tools for tone and other improvements, but not full prompt-based generation. I just installed the plugin, since the free version has it. The improvements that take the full context of the text into account are nice and seem safe (in my limited testing), but the generative aspects are worrying. So it may depend on which aspects of the tool you are using.
    Commented Feb 11 at 11:14