
This is just a hunch, and people more familiar with newcomer behavior might be better qualified to judge this:

User "Mike Song" joined today and already wrote answers to a number of pretty old posts. These answers somehow smell of being generated by LLMs, although I would not be able to pinpoint exactly what makes me suspicious about them (maybe the perfect spelling is an indicator). What do others think? Just a prolific and interested writer? Or really a bot user (although I would have no idea why one would want to do that)?

  • There are certainly many bad, meandering answers, rapidly posted. They also seem to get stuck in some small area and just keep spewing text about that one small area. Both of those seem like characteristics of GenAI LLMs to me, but most of my experience with them is on math.se, where they usually state obviously incorrect "mathematical facts", so it's harder to be certain here. I'd vote "yes, probably", and would appreciate others also checking them out.
    – JonathanZ
    Commented Jun 18 at 13:23
  • I don't know that it's particularly appropriate to discuss whether a particular user is breaking the rules in public on Meta - this could affect them quite negatively if you're wrong (if people go and downvote their answers). It's probably more appropriate to instead flag one of their posts and bring up these concerns for a moderator to look at. If a moderator sees this, they could maybe confirm or clarify the appropriate process to follow for such things. (At the time of writing, I haven't looked at any of their posts.)
    – NotThatGuy
    Commented Jun 19 at 0:11
  • I think you're right. I guess he's probably not just writing with AI but even trawling philSE with some automated aid
    – Rushi
    Commented Jun 19 at 8:11
  • @ScottRowe: Because most humans do not think logically before writing, let alone before speaking. Witness how many people believe in Choprawoo, and how quickly they forward fake news.
    – user21820
    Commented Jun 25 at 11:20
  • @user21820 so if we see logical writing, that would be a big tipoff!
    – Scott Rowe
    Commented Jun 25 at 12:13
  • @ScottRowe: No. ChatGPT cannot think logically either. That's why we can't easily distinguish its writing from a lot of human writing. If you see completely cogent logical reasoning, not of a type that could already have been somewhere on the internet, then you know it's not ChatGPT.
    – user21820
    Commented Jun 25 at 14:35
  • @ScottRowe That's not a philosophical question. (I don't think it's interesting, either.) "so if we see logical writing, that would be a big tipoff!" -- no, you have the logic completely wrong. You asked why we can't make the distinction, and the answer was about the lack of a difference. (FWIW, I don't happen to agree that "you know it's not ChatGPT".)
    – Jim Balter
    Commented Jun 28 at 6:43
  • "I don't know that it's particularly appropriate to discuss whether a particular user is breaking the rules in public on Meta" -- I know ... it's obviously not.
    – Jim Balter
    Commented Jun 28 at 7:27
  • @JimBalter I bet ChatGPT would not say "I don't know that..." when it is asserting something. It's not a philosophical question, any more than the Turing test is.
    – Scott Rowe
    Commented Jun 28 at 11:36
  • I have some suspicions that a few users here may be AI bots. There are a lot of researchers trying to see how well LLMs can pass as humans in many contexts, and this kind of site would seem like a good target. The problem is (as I think you have shown) that we risk insulting someone by accusing them of either using AI or being a bot. This seems like an inherent weakness. So far, I personally have chosen to avoid that risk.
    – JimmyJames
    Commented Jul 1 at 19:28
  • I can't make sense of Scott Rowe's comment to me. No one accused NotThatGuy (who said "I don't know that ...") of being ChatGPT, and it was Scott who asserted that something was a philosophical question in one of his now-deleted comments that I referenced. And the Turing Test isn't any sort of question, but it was proposed in a paper that opens with the question "Can machines think?", which is a philosophical question. (As is the related question "Do LLMs think?")
    – Jim Balter
    Commented Jul 7 at 7:38

1 Answer


The answer to this question is very simple. I have written a lot about philosophical topics before. When I first came to Stack Exchange yesterday, I searched for tags and topics that interested me, then made some modifications and additions to my previous writing and posted it under those topics. There are also about two or three topics that I answered in real time.

I don't know whether this behavior complies with the rules of this website.

Because these are all things I wrote before, they may not fully match the questioners' thinking, but I have tried my best in selecting and modifying them. If it still seems to you that I am doing this with some special purpose, then I am very sorry.

Of course, due to my limited professional knowledge, much of my writing may not be very professional. If it is deleted for this reason, I can accept that.

Regarding "perfect spelling", that's because I don't usually write in English. I have already written it and then translated it using translation software. I wonder if I express myself clearly?

  • Mike, I apologize for coming to this conclusion with far too little evidence, and I'll be a lot more cautious in future. If it's OK with you, I will delete this post.
    – Hans-Martin Mosner
    Commented Jun 20 at 6:13
  • @Hans-Martin Mosner There is no need to delete it. This post can serve as a reminder for new users to avoid unnecessary misunderstandings caused by the same mistake. Thank you for listening to my defense.
    – Mike Song
    Commented Jun 20 at 6:44
  • Hey there, I have been a bit sceptical regarding the AI-gen suspicion because some things did not sit right with me. Your posts seem to be based on extensive reading and some background knowledge, so I would ask you to add references to specific texts where possible. This would greatly improve their utility for other users, and thus the quality of your posts by Stack Exchange's standards.
    – Philip Klöcking Mod
    Commented Jun 20 at 19:39
  • @Philip Klöcking I'll try my best, but I don't think it's necessary to cite the source of some widely known concepts, such as Descartes' mind-body dualism, Kant's theory of transcendental apperception, and Heidegger's theory of existence. Do you agree? Additionally, I believe that this website should be more inclusive of personal perspectives, as there is never a standard answer to a philosophical question. Philosophy has always been at the forefront of our understanding of the world, and there will inevitably be huge differences of opinion; that has always been its tradition.
    – Mike Song
    Commented Jun 21 at 3:01
  • @Philip Klöcking Philosophy is not computable and verifiable in the way that disciplines such as mathematics and physics are. Any existing philosophical theory begins as the private viewpoint of a philosopher, and only becomes a public idea after gaining widespread recognition. Even so, these philosophical theories may not necessarily be true. I believe that as long as a respondent has their own complete argument, grounded in scientific research results and obvious common sense, and the argumentation process is reasonably rigorous, it can serve as a reference answer for the questioner.
    – Mike Song
    Commented Jun 21 at 3:03
  • @Hans-MartinMosner I just read Mike's answer about acting being lying, and it doesn't read anything like what LLMs produce. So yeah, this should serve as a warning about hasty, baseless, incompetent charges against people, posted publicly no less (sheesh).
    – Jim Balter
    Commented Jun 28 at 7:21
