21 events
when toggle format what by license comment
Aug 3, 2023 at 1:51 comment added Mark Johnson @This_is_NOT_a_forum When thoroughly checked (with possible adaptations/corrections/additions), then it becomes a product of their own mind. When it is blindly copied, pasted, and posted as if it were their own product, then it is plagiarism. The problem is that many of these 'authors' don't have the faintest idea (or worse, don't care) whether 'their' answer is factually correct or a chapter of history from alternative-reality-1527, but submit it anyway to gain reputation. That is the problem in my mind.
Aug 3, 2023 at 1:15 comment added This_is_NOT_a_forum ChatGPT can be useful, just not for anything factual (or at least it must be thoroughly checked). For instance, to come up with input for a regular web search.
Aug 3, 2023 at 1:11 history edited This_is_NOT_a_forum CC BY-SA 4.0
Active reading [<https://en.wiktionary.org/wiki/industry#Noun> <https://en.wiktionary.org/wiki/bookkeeping#Noun> <https://en.wiktionary.org/wiki/analysis#Noun>]. Added some context.
Jul 29, 2023 at 16:45 history rollback Mark Johnson
Rollback to Revision 3
Jul 29, 2023 at 12:21 comment added tripleee If you are discussing ChatGPT in particular, I think it's fair to assume that you should be familiar with the central terminology, especially as this meaning of the abbreviation LLM has become almost a household word in recent months.
Jul 29, 2023 at 12:12 history edited tripleee CC BY-SA 4.0
Remove weird digression about friends who received the email
Jul 29, 2023 at 10:35 comment added Mark Johnson @tripleee There are many usages of LLM - Wikipedia as an abbreviation. That is why it should be written out when first used.
Jul 29, 2023 at 10:19 comment added tripleee That's a weird backronym. Generally, LLM stands for Large Language Model.
Jul 28, 2023 at 19:15 comment added Mark Johnson @Mark All the more reason that an LLM (Logic Learning Machine) should not be the source for answers based on knowledge. If it cannot state either 'Yes, an answer is possible' or 'No, a reliable answer is not possible', then it is neither logical nor intelligent.
Jul 28, 2023 at 18:52 comment added Mark @MarkJohnson, LLMs aren't deliberately designed to not say "I don't know". Rather, the inability to say "I don't know" is an unavoidable consequence of how they work: LLMs provide the text that, based on the training data, is most likely to follow the input. The only way to get an "I don't know" out of one is to provide it with a list of things that it doesn't know.
Jul 28, 2023 at 16:02 comment added Mark Johnson @OldPadawan Looking at the International Monetary Conference of 1867 quickly shows that it took place in Paris, not in Moresnet as claimed. That is the danger in my mind, since it makes it more difficult to check whether the result is plausible.
Jul 28, 2023 at 16:01 comment added Mark Johnson @OldPadawan No, for them it is considered a source of information. When rephrased ('What major conference took place in 1867?'), correct results were returned. But adding 'Neutral Moresnet' to the question returned an incorrect result. Those familiar with the events (or who do cross-checking, as the OP did) recognise this. But some people don't, assuming (incorrectly) that only correct information is returned. The lack of sources makes it more difficult. ...
Jul 28, 2023 at 15:50 comment added Sarvesh Ravichandran Iyer Thank you very much for your informative answer. It was a pleasure to read.
Jul 28, 2023 at 15:41 comment added OldPadawan "For the gullible, ChatGPT and Co. are a dangerous source of information." -> did you mean "misinformation"? :)
Jul 28, 2023 at 12:44 history edited Gert Arnold CC BY-SA 4.0
spelling
Jul 28, 2023 at 11:15 comment added Mark Johnson @Cerbrus As stated, it was mainly for the benefit of Stack Exchange, Inc., to assist in understanding why the moderators and some users are reacting this way. Understanding the reason why often helps in resolving a problem. That was my intention in adding what I actually wrote to others this morning.
Jul 28, 2023 at 11:00 comment added Cerbrus Yes, in the first 3 paragraphs... But the rest of the answer? It somewhat lacks focus. Again, I agree with your points; I just think this is not quite the place.
Jul 28, 2023 at 10:43 comment added Mark Johnson @Cerbrus The simple non-acceptance by Stack Exchange, Inc. that AI-generated answers are plagiarism seems to me to be a major, justified cause of the strike. That the moderators don't use (as implied) the AI tools, with the reason why, is also stated.
Jul 28, 2023 at 10:20 history edited Laurel CC BY-SA 4.0
deleted 3 characters in body
Jul 28, 2023 at 9:50 comment added Cerbrus There's a lot in this answer, but a lot of it is also not a comment on the strike. I think this answer is better suited to a question that discusses the ban on LLM content... While I agree with the answer, it does not seem to answer this question.
Jul 28, 2023 at 8:49 history answered Mark Johnson CC BY-SA 4.0