Wikibooks talk:Artificial Intelligence


Suggestions


Thanks for starting this draft policy! I was the one who started the thread on the Reading Room/Assistance page. I'm not really a contributor to wikibooks, so I'm not going to say what does or does not make sense for this project, but I did have a few comments on the draft.

Prompt attribution:

Which prompt(s) were fed into the tool (e.g. "Make a graph to show the change of the price of pomegranates from 1980 to today" or "Generate an image of bebop as a person").

I assume this has something to do with reproducibility.

  • I wonder how useful this will turn out to be, as to my understanding ChatGPT responses aren't reproducible like that; that is, if someone else comes along and gives it the same prompt, they will get a different output.
    • I have seen marketing material for Office365. This claims that it will trawl through the personal content in the same account, in order to "ground" prompts to the AI, which will make the AI result less reproducible across accounts. Do we want to seek the "grounded" prompts to the AI? Whether this is possible or subject to RFC 6919 part 1 would depend on whether the tools make grounded prompts visible to the user. Jfw01 (discusscontribs) 23:10, 10 April 2024 (UTC)Reply
  • This will be compounded if it requires multiple iterations of a prompt to generate satisfactory copy (e.g. please write a section about X ... Please make sure to mention Y ... No, not like that, this bit is wrong ... &c), as the responses would diverge.
  • This might only be practical for the use of LLMs for very discrete sections of articles. I can imagine a use case where someone might want to give an entire page to ChatGPT and ask it to update it, for instance.

Moderation:

Should there be some suggestion as to what amount of LLM generated content is suitable per wikibook, or as to the amount of curation per word of LLM generated text?

An unintended consequence of an open policy on LLM-generated text may be that soon there are thousands more wikibooks, all produced by asking ChatGPT-4 "Please write a 50,000 word wikibook on [the history of aviation, the life and times of Sir Walter Raleigh, Fermat's last theorem...]". If so,

  • The existential question: is there any point to wikibooks if it is just a step removed from asking Chat GPT for something? What is the value-add?
  • The quality control question: LLMs could potentially create far more content, far more quickly, than Wikibooks currently deals with and can edit, fact-check, and so on.
  • The copyright question: it seems to be possible for LLMs to plagiarise or copyvio other sources. The more words produced by them--whether that's 100,000 words spread across 1,000 pages or 100,000 words in one wikibook--the more likely there are copyright issues that go undetected by the editor, because they did not themselves copy the text.

Perhaps I am just catastrophising and this is in fact the beginning of a brave new world, but these were some issues that I have been wondering about--it does make me wonder if saying, "you're not allowed to use LLM generated text at all" might be easier, if completely unenforceable, or, "you may only use LLMs to edit or provide suggestions on original text you have produced." JCrue (discusscontribs) 22:31, 12 February 2024 (UTC)Reply

I am not assuming that it's reproducible (as you point out, it's not), but this keeps us from having more-or-less useless declarations like "made with AI". Plus, there could be meaningful licensing differences between tools and when and how they were generated. E.g. a model may have been fed copyrighted material up until date [x], but they stopped doing that on date [y], so any attribution or copyright issues could be irrelevant or drastically different.
To answer your questions:
  • Re: your first point, you have already answered your own question. Precisely because asking LLMs the same question gets you all kinds of answers, having a deliberate, vetted, reliable textbook that includes multiple sets of eyes on it is very different from the "this is just whatever MidJourney spits out today" kind of answer you get from these tools as of now.
  • This is why I included a clause about how just publishing whatever you get from ChatGPT without first editing it is inappropriate and blockable. Someone could come along and spit out GBs of trash that seems kinda/sorta reasonable. It's really important to not let anyone publish a flood of AI junk.
  • This is the Wild West part and where I as a non-lawyer can't give any trenchant perspective.
Great points: glad you brought up the issues. —Justin (koavf)TCM 01:12, 13 February 2024 (UTC)Reply
Thanks for the suggestions @JCrue! Regarding moderation, I feel like a lot of what you mentioned is covered by the stipulation that "LLMs may not be used to generate original material at Wikibooks" and can only be used as an assistant using the provided guidelines. In this way, an LLM would not be able to write a book or page or even paragraph on its own (i.e. it can't write prose), and in theory this would prevent massive contributions. Regarding copyright, I think we'll have to wait and see how things hash out in the legal realm and in bigger communities than we have here. What do you think? —Kittycataclysm (discusscontribs) 02:58, 13 February 2024 (UTC)Reply
Would prefer a total ban on the use of AI with regards to Wikibooks. Maybe in a few years when it is actually capable of producing useful text but it certainly is a long way from that point right now.--Xania talk 05:07, 23 February 2024 (UTC)Reply
Xania is correct. Two of the three coding/programming sites I use have an outright ban on ANY use of "AI" in code or documentation creation. If we believe that "precision of language" and integrity of content is important, we must not allow tools with unknown bias to change the meaning or tone and present it as being true and correct. Eggiejw (discusscontribs) 21:24, 26 February 2024 (UTC)Reply
These are valid concerns! To make sure I understand, it seems to me that @Xania and @Eggiejw are against LLMs generating content themselves—I agree with this stance for all of the reasons stated, and the policy as it stands does indeed reflect this. However, I have seen people advocate for another use case for LLMs as tools to suggest things like structural outlines—in these circumstances, the LLM is generating almost no actual finished publishable content, and the risk is thus extremely low. While I would not personally use an LLM to suggest an outline for a book/chapter, I don't see a huge risk inherent to editors doing so, especially if the suggestions are vetted by a human and if the finished content itself is not LLM-generated. —Kittycataclysm (discusscontribs) 13:53, 31 March 2024 (UTC)Reply
Yeah, I don't see why people using AI for ideas or outlines is such a bad idea. ForTheGrammar (discusscontribs) 02:02, 3 April 2024 (UTC)Reply
As a necessary precaution in the pursuit of quality there needs to be a zero-tolerance policy for LLM output in Wikibooks, at least as far as "content" goes. LLMs deal with plausible sentence construction, not fact aggregation, and the books deal with creativity for fiction and facts for nonfiction. LLMs cannot help in reaching either goal. So the official suggestion as to what amount of LLM generated content is suitable per Wikibook should remain zero, zilch, none. When new models come out, then the question needs to be revisited. However, currently we have only LLMs which are grammar engines and have nothing to do with fact, so LLMs should not be used here. Larsnooden (discusscontribs) 08:39, 8 April 2024 (UTC)Reply
I have seen what I think is a type of fault with LLMs. There are characteristic spelling errors in English made by people who have previously learned another language. Given a question with such a pattern of spelling errors, ChatGPT treats the writing as a separate genre and gives back spelling errors in the same pattern, which will be unobvious to the writer of the question. I think there might be a similar effect around gaps in coverage in a book outline. So, if using an AI to write an outline were a bad idea, the effect would be to make someone equally wrong, while looking more plausible and being more confident. Jfw01 (discusscontribs) 23:20, 10 April 2024 (UTC)Reply
These are interesting considerations—thank you @Larsnooden @Jfw01! It seems like you're saying that with LLMs, the quality of the input determines the quality of the output. In this scenario, even if someone is just generating something like an outline, a poor-quality prompt could generate a poor-quality outline that negatively impacts the scope of a potential book. Did I get that right? —Kittycataclysm (discusscontribs) 00:17, 11 April 2024 (UTC)Reply
Close. The output of LLMs will be factually deficient regardless, because the quality of the prompt has no effect on the creativity or, more importantly, the factual accuracy. Facts don't enter the picture with LLMs. As language models, they deal with grammar and produce sentences. Whether those sentences overlap with fact is pure chance. LLMs could be misused as grammar checkers, but the result would simply be grammatically correct, plausible-sounding sentences, chock full of errors or outright falsehoods. The current landscape is dominated by LLMs which are not appropriate for legitimate uses. They can be used to churn out disinformation and chaff at scale, but that goes under the category of electronic warfare and not under that of writing electronic books. LLMs are a small subset of AI, however, and things will change over time but not soon. (PS. Most early evaluations gushing over LLMs in recent years appear to have been subject to the Gell-Mann Amnesia Effect, probably due to the apparent confidence in the sentences.) Larsnooden (discusscontribs) 03:43, 11 April 2024 (UTC)Reply
In general, yes: poor quality prompts often lead to poor quality output that reinforces the poor quality. This is under the assumption, which may be counterfactual, that there are conditions under which there is good quality output. Jfw01 (discusscontribs) 11:23, 14 April 2024 (UTC)Reply

Suggestions for Policy Improvement


I asked PizzaGPT "How can this policy be improved?". Among other (in my opinion not useful) things, it suggests: "Provide examples or case studies demonstrating best practices for using AI-generated content on Wikibooks. This can help editors better understand how to effectively incorporate AI tools in their writing and editing processes while minimizing potential risks."

Then I asked also Bing / Copilot:

Suggestions for Policy Improvement by Bing / Copilot

  1. Definition of AI: The policy could benefit from a more precise definition of what constitutes "artificial intelligence" or "large language models". This would help users understand exactly what types of tools are being referred to.
  2. Risks: While the policy mentions that AI-generated content can be unreliable or incorrect, it might be helpful to provide more specific examples of the types of errors or issues that can arise.
  3. Guidelines: The policy could provide more detailed guidelines on how to use AI tools effectively and responsibly. For instance, it could include best practices for vetting AI-generated content or tips for crafting effective prompts.
  4. Ethical Considerations: The policy could address ethical considerations related to the use of AI, such as issues of bias or the potential for AI to be used in ways that violate community norms or standards.
  5. Review Process: The policy could include a process for reviewing AI-generated content before it's published, to ensure it meets Wikibooks' standards.
  6. Disclosure Requirements: The policy could require users to disclose not just the tool used and the prompts given, but also any modifications made to the AI-generated content.

Remember, these are just suggestions. The final decision on how to improve the policy should be made in consultation with the Wikibooks community.


Laurusnobilis (discusscontribs) 11:47, 23 February 2024 (UTC)Reply

Use of LLMs for this policy (and evidence of issues)


I noticed that @Laurusnobilis actually used LLMs to edit this policy in Special:Diff/4373481 and Special:Diff/4373482. While reviewing these edits, I noticed some key issues that are actually very relevant to the development of the policy. In particular, the LLMs did the following:

  • Changed words such that the meaning was different;
  • Changed words and syntax such that the formality/tone was changed in a way that may have been "fancier" but harder to understand for all users of the project;
  • Introduced grammatical/punctuation/spelling errors and made some sentences harder to read.

I had to go back and individually revert a lot of these issues, and I think this provides an excellent case study in the limitations of LLMs here. I'd also like to propose the following:

  • Could we maybe not use LLMs to help write the policy on LLMs? It strikes me as inappropriate.
  • Including in the policy that LLM-assisted edits containing significant issues can be quickly reverted—I'm concerned about the project having to waste time individually going through and correcting errors inserted by the use of LLMs.

Kittycataclysm (discusscontribs) 13:21, 23 February 2024 (UTC)Reply

Please note that every change suggested by the LLM was manually checked by me, that only a part of the suggested changes were included, and that English is not my native language.
Laurusnobilis (discusscontribs) 15:08, 23 February 2024 (UTC)Reply
This is good to know—thank you. I will note, however, that this in itself does speak to a significant limitation of LLMs that we should take into account when developing the policy. The draft policy currently states: "LLMs should not be used for tasks with which the editor does not have substantial familiarity, and their outputs should be rigorously scrutinized for compliance with all applicable policies. Editors who are not fully aware of the risks associated with LLMs and who are not able to overcome the limitations of these tools should not use them to write books." Because they cannot properly check the output of the LLM, I am not convinced that non-native speakers of a language should use an LLM to make language-based edits—we have just seen that this can cause issues. —Kittycataclysm (discusscontribs) 19:15, 23 February 2024 (UTC)Reply
In my opinion, "the policy that edits by LLMs containing significant issues can be quickly reverted" should not depend on whether the changes were generated by an LLM or a human.
Laurusnobilis (discusscontribs) 15:19, 23 February 2024 (UTC)Reply

Copyright?


Wikipedia has an essay about LLMs violating copyright, maybe we could incorporate some parts of this into the policy.

96.5.246.62 (discuss) 19:32, 23 February 2024 (UTC)Reply


Suppose the training data of an LLM violates copyright, and the violation can be detected in the future but not now. Think of a specific Wikibook created with GenAI: the community works collaboratively to improve that Wikibook and spends hours and hours on derivative work on it. Later on, all those working hours are lost because the Wikibook has to be removed due to the copyright violation hidden in the LLM's training data. Maybe it is worth including that consideration - it could be a very frustrating event for the community members. --Bert Niehaus (discusscontribs) 19:01, 28 March 2024 (UTC)Reply

I agree—that would be frustrating! But it does raise the question: would a book have to be removed simply because the training data of an LLM itself violates copyright? The policy as it stands currently bans LLM-generated content—instead, LLMs could only be used as advisors or for making suggestions. This fact should effectively eliminate the risk of copyrighted content being inserted by an LLM. Thoughts? —Kittycataclysm (discusscontribs) 14:12, 31 March 2024 (UTC)Reply
I must admit that I'm not much into this subject, but wouldn't it be better to use AI detectors to find out what percentage of a specific text was AI-generated or partially so? I don't know whether any of these tells which percentage (approximately, of course) of the content is human-generated or not, whether there are any open source tools of this kind (keeping in mind that all or at least a great part of the tools in use on Wikimedia have to be open source and free), or how accurate they are (I think that the most reliable one on this point is Copyleaks). Anyway, many of these (and, I suppose, the better ones available) are currently paid. And we would still need to decide what percentage of the content has to be human-made for us not to cut it off as intrinsically copyright-violating (and, as exposed in this essay, LLMs have no personhood and do not have copyright of their own, so we would need to verify whether it belongs to the company which created the model or to the sources of the generated content, and these, given that AI can summarize thousands of texts in seconds, are probably hard to identify), and to standardise (or even create) the detector Wikimedia would use for such. Anyone here understand more about this topic? Magi129 (discusscontribs) 14:57, 29 April 2024 (UTC)Reply

Global discussion


Has there been an attempt by the Foundation or the global community to define a policy for AI? Leaderboard (discusscontribs) 11:18, 24 February 2024 (UTC)Reply

Good question—as far as I've been able to find, there isn't any global policy discussion. Various essays and other non-policy pages I've found are as follows:
For the moment, it seems up to individual projects to develop their own policies.
Kittycataclysm (discusscontribs) 15:58, 24 February 2024 (UTC)Reply

Template


I really like this policy, but I do have a question. When would the template be used? A human is vetting the information already before publishing, right? Is this the sort of thing that needs multiple eyes?
Thanks, I can do stuff! (talk) 05:45, 25 February 2024 (UTC)Reply

How I imagine it is that it's just placed in a relevant piece of content on Wikibooks and is published at the same time as said content, without requiring anyone's editorial oversight. —Justin (koavf)TCM 21:58, 26 February 2024 (UTC)Reply
Okay, but would it just be removed once a different human has gone over it? If so, is the original prompter not enough? I thought the policy required the human prompter to go over the AI's materials before putting them on Wikibooks.
Thanks, I can do stuff! (talk)
That's not how I conceived it, but the policy is not finalized. —Justin (koavf)TCM 23:59, 26 February 2024 (UTC)Reply
Sorry, I didn't see your message. How did you conceive of it? I can do stuff! (talk) 22:36, 4 April 2024 (UTC)Reply
This is a good question and something I've also been thinking about! One issue I can see with blanket-classifying a page as having AI influence is the question of when the template ultimately gets removed (for how long is it applicable, ship of Theseus-style). I think it's definitely important to have disclosure permanently live on the talk page and in edit histories. And, I think a human should thoroughly vet everything BEFORE publishing. So, I'm now wondering how useful a generic disclosure template would even be on the primary page. —Kittycataclysm (discusscontribs) 01:12, 27 February 2024 (UTC)Reply
My two cents is that the template should function like the pending changes system. I don't know how everything is on the backend, but some sort of flag that the person may have used AI may be useful; however, people can write trash anyway, so AI specifically shouldn't be called out with a banner or template or whatever outside of the pending changes banner and something connected to it.
I can do stuff! (talk) 01:32, 27 February 2024 (UTC)Reply
I approve of an AI-generated template. I think that it should not be removed. I am contemplating one tag with labels for the last date when AI content was added, and the last date of human review. It might be that these tags end up having to be per-section. Jfw01 (discusscontribs) 23:49, 10 April 2024 (UTC)Reply
Hi everyone, I also agree with the idea of adding an AI-generated template to Wikibooks. In that way all wiki users will have easier access to the content they desire. It will also make Wikibooks faster and more efficient to use without any mere issues or certain redirects; it will be like "straight to the point". But why only Wikibooks? I think all Wiki pages/websites must be integrated with an AI template, making it easier to access the desired information, plus not all of us are English literate, so the AI system must support a wide diversity of languages worldwide. If this plan does succeed, then Wiki- will become a major information site like Google. Google also has an AI integrated into its computing system, but I haven't heard anyone complaining about its use. Rather, you people enjoy using Google, and instead of giving other websites/companies room to grow/expand, we're busy contemplating its growth, which is unfair... Wiki should be given the right to do whatever it wants with its websites. I think let's give them a chance and we'll see what happens further on... I see something amazing will come out of this... and we'll benefit from the results... Zeno C66 (discusscontribs) 09:44, 6 June 2024 (UTC)Reply
Do you-all intend to mandate that the reviewing human be independent of the human who added the AI content? If so, do you want a mechanism for tracking and testing that independence? Jfw01 (discusscontribs) 23:49, 10 April 2024 (UTC)Reply

My opinion


Copyright issues aside, if we were to allow use of AI, we would be overflowing in AI-generated "books" that are vapid and contain very little interesting content. I do not trust AI to produce material that is interesting to read. Conversely, I can readily spot AI-generated text when reading a passage, because it sounds very robotic. Also, an AI program would probably spit an entire book out in one long text, which does a disservice to our formatting capabilities and subpage system.

While someone might come along and improve an AI-generated book, the most likely outcome is that nobody will, and then we will have a glut of low-quality generated books that drown out the high-quality manually created books. This does not reflect well on the reputation of this project, and it makes our high-quality content much harder to find.

If I were to spot this kind of content in a book, I would remove it because it adds nothing. If I were to find a wikibook that is entirely ChatGPT output, I would nominate it for deletion because it contributes nothing to our mission, which is creating free-content knowledge.

In short: I believe that all AI-generated content should be banned, and that use of AI will automatically be a sanctionable offense. No exceptions. SupremeUmanu (discusscontribs) 19:40, 27 February 2024 (UTC) (Text modified SupremeUmanu (discusscontribs) 19:41, 27 February 2024 (UTC)).Reply

I am no expert on AI generated text, but it appears that this policy also addresses machine translation that can and should be allowed to help bring books published on different language Wikibooks into the English Wikibooks. Just my $.02 Ottawahitech (discusscontribs) 20:46, 27 February 2024 (UTC)Reply
I am fully in support of using machine translation to bring material from other languages’ Wikibooks into our project. SupremeUmanu (discusscontribs) 20:56, 1 March 2024 (UTC)Reply
Thanks for weighing in! I definitely agree that a risk of LLM use is a high volume of poor-quality books. And, I think that the draft policy's stipulation that LLMs "may not be used to generate original material" covers that well. Just to make sure I understand, though: what do you think about someone using an LLM to, say, generate a skeleton outline for a book/page but then actually write the content without the LLM?
Kittycataclysm (discusscontribs) 23:16, 27 February 2024 (UTC)Reply
Pinging @SupremeUmanu again to see what you think. Cheers —Kittycataclysm (discusscontribs) 19:20, 1 March 2024 (UTC)Reply
Sorry about that! I was not monitoring this page. I was indeed focusing on using ChatGPT to create content directly. I am fully in support of using ChatGPT to create outlines; I have used it for that very reason several times. And if we have a policy that says ChatGPT-generated material can be nominated for deletion if appropriate, I’m happy. Thanks so much! SupremeUmanu (discusscontribs) 20:53, 1 March 2024 (UTC)Reply

Overall comments


This is a great draft that has a good balance in responsible use of AI! Best, — Frostly (discusscontribs) 18:52, 1 March 2024 (UTC)Reply

I agree. "Just don't believe anything it claims on its own, and tag it" is already a great policy in itself. 192.208.157.145 (discuss) 20:40, 23 May 2024 (UTC)Reply

Just ban AI books


The flow of them, if allowed, will be insanely high and will lead to the unusability of Wikibooks. WeaponizingArchitecture (discusscontribs) 19:05, 1 March 2024 (UTC)Reply

Hi @WeaponizingArchitecture. The policy as it stands would indeed prohibit AI-generated content (i.e. "original" text) for exactly the reasons you mentioned—many people here seem to agree with this. But, what do you think about the guidelines on using LLMs as 1) writing advisors (e.g. to provide rough outlines or suggested edits) or 2) machine translators? Not many people have weighed in on these aspects. Thanks! —Kittycataclysm (discusscontribs) 19:25, 1 March 2024 (UTC)Reply
I don't remember anyone ever referring to pre-existing things like Google Translate as "AI" until this stupid techbro grift got flooded across the net. Could be wrong, but even then, if people are using things like Google Translate for books on languages they should be banned; Google Translate is a massive crock of garbage. WeaponizingArchitecture (discusscontribs) 15:09, 4 March 2024 (UTC)Reply
@WeaponizingArchitecture: I am not sure but I think deep (which I know nothing about) is considered AI.
As far as your comments about google translate, I have been using it and I find its translations are getting better and better all the time. One thing you can do is translate back and forth a couple of times, then check the latest version against the original. Ottawahitech (discusscontribs) 17:29, 4 March 2024 (UTC)Reply
Some languages work fine, but a lot of smaller languages, especially non-Indo-European ones, get the worst translations ever. WeaponizingArchitecture (discusscontribs) 18:33, 4 March 2024 (UTC)Reply
Yeah, the policy already forbids fully AI written books, but I suspect that even if Wikibooks adopted an "open season" AI policy, not much would happen. Wikibooks isn't popular enough for people to care to write AI books. ForTheGrammar (discusscontribs) 18:08, 2 April 2024 (UTC)Reply
It certainly isn't now, but I can imagine the scenario where a slew of AI junk gets spammed here. —Justin (koavf)TCM 20:21, 2 April 2024 (UTC)Reply

Suggestion for AI generated Media (Watermarked)


I would like to address the issue that, unless identified by a human moderator, most AI-generated media would not automatically be identifiable without a disclaimer.

As a result, I think it would be useful to have a linked watermark in all AI-generated media (photos, videos, etc.) which would link to a page that describes the prompt used by the human author to obtain the media and which tool was used.

The watermark could be applied on the Wikibooks upload page, with the link permanently attached to the uploaded content, so that any copies of the media include both the watermark and the information on the linked page.

At a glance, this would allow users of Wikibooks to identify AI-generated content. Human moderators would have an easier time validating the media and could identify unreferenced AI media.

Food for Thought - BallardB (discusscontribs) 16:12, 5 March 2024 (UTC)Reply

You are responsible ...


Hello, I am a rare visitor to Wikibooks, but I use automatic translations sometimes. If the translation systems are a hint of how future AI books will look, I would expect a lot of rubbish.

NTL: the current draft says "You are responsible for ensuring that use of an LLM will not disrupt Wikibooks." I think I have an idea of what is intended here, but from a practical point of view I do not like the sentence: how can I make sure that I do not disrupt Wikibooks? Should I ask somebody if they feel disrupted? Please drop this line, as it is too vague. IMHO the other lines are a guideline and OK with me. -- A1000 (discusscontribs) 12:30, 9 March 2024 (UTC)Reply

Hi @A1000, I cannot find the text you refer to with the word "disrupt". Are you looking at the current version of the draft?
Just curious, Ottawahitech (discusscontribs) 23:19, 9 March 2024 (UTC)Reply

@Ottawahitech:

Wikibooks:Artificial Intelligence -> Text generation -> 2.
hope that helps ... -- A1000 (discusscontribs) 23:38, 9 March 2024 (UTC)Reply
I think Wikibooks:Artificial Intelligence#Translation appropriately handles this. --SHB2000 (discusscontribs) 11:42, 10 March 2024 (UTC)Reply
@A1000, thanks for taking time to continue this exchange.
This is the complete paragraph that I see:
  • "2. You are ultimately responsible for the content generated by your use of an LLM. LLMs should not be used for tasks with which the editor does not have substantial familiarity, and their outputs should be rigorously scrutinized for both quality and compliance with all applicable policies. Editors who are not fully aware of the risks associated with LLMs and who are not able to overcome the limitations of these tools may not use them."
Do you see something else that shows the word disrupt in it? I am curious because in the past (on other wikis) I noticed that I was looking at an earlier version of pages, and I wonder if this is the case here too. Ottawahitech (discusscontribs) 15:37, 10 March 2024 (UTC)Reply

Reflections from the author of Chatbots for Social Change


I have been working on and off for a few months on a project which brings together research in the social sciences and cutting-edge LLM and machine-learning research to address the question of how chatbots can be used as a mediator or bridge, a universal intermediary and deep understander of persons' perspectives, goals, understandings, etc. As it felt in line with the project I have not been shy about using chatGPT to generate and edit content, as well as help scope and outline the book.

I lie in an interesting intermediate space in this discussion, being an expert in the content I'm having chatGPT help write, and in my experience it has been a great tool to grease the wheels.

That being said, I've found that its writing style, lack of in-depth knowledge, biases, and lack of context of the overall project (I don't put the whole book in each time I ask it to write something) means that in many cases I am better served writing content myself. However, the project as it stands now is very hard to separate into human-only and computer-aided content, and would likely need to be re-written in its entirety. It is by no means complete now, so it would not be a death-blow to the project, but would have a significant impact on the future of the project.

I can speak to the existential question posed in the initial suggestions section, "is there any point to wikibooks if it is just a step removed from asking Chat GPT for something? What is the value-add?" As of now it is impossible without additional tools to write a good wikibook using chatGPT with something as simple as a prompt. In my experience it needs significant guidance to construct something palatable. However, I have built tools as this project has progressed which could serve to break the construction of a wikibook into many smaller tasks, including academic and internet research, outlining, and writing of individual sections automatically. Whether continuing such a process to its conclusion would produce a good textbook, I have no idea. However, if it could do this well, the textbook would still serve a distinct and positive purpose above and beyond what chatting with ChatGPT can serve, just as a textbook serves a distinct purpose from chatting with a knowledgeable instructor. Indeed, if Wikibooks' content base grows exponentially, filling with high-quality textbooks, there is no problem to be solved. I don't want to diminish concerns of poor content flowing into the platform, though. I understand moderation may be impossible (at least, without the help of a powerful LLM bot).

I don't know what else to contribute here, but given my intense involvement in using chatGPT on wikibooks over the last months, I thought I would throw my voice into the discussion. Please feel free to comment, or ask questions.

--Amcgail (discusscontribs) 18:08, 11 March 2024 (UTC)Reply

@Amcgail, thanks so much for bringing Chatbots For Social Change to our attention. I have heard a lot of snippets about the topic of Artificial Intelligence recently, but I still do not understand what is considered AI and what is not. We have used software/computer programs/applications/machine translations/etc. for several decades now, so I need to read your book to understand what is different about Chatbots. Also, since I assume that those Chatbots are available now at no (visible) cost, how long will they stay gratis?
Thanks in advance, Ottawahitech (discusscontribs) 16:36, 12 March 2024 (UTC)Reply
The actual cost of using these is already extremely low. I often use the API, and you can produce 25k words for $1 right now using GPT-4. In addition, I've seen reasonable performance of open-source models running locally on my PC (so that's gratis forever).
Also I will just emphasize that the book is very much in progress at the moment, but I can briefly answer your question. Chatbots are fundamentally new in that they can produce coherent text (or at least, coherent-sounding). In the worst-case scenario, one can produce a lot of reasonable-looking text which says incorrect stuff. Medium-worst, it's just a poor or bland presentation, and there's such a flood that it washes away the good stuff on wikibooks.
Hope this helps! Amcgail (discusscontribs) 21:25, 13 March 2024 (UTC)Reply
The climate emergency cost of using LLMs is likely to be huge, as well as the huge use of water. It's not gratis nor is it "extremely low" cost. We cannot pretend that external costs do not exist. We have access to knowledge. Boud (discusscontribs) 13:55, 21 March 2024 (UTC)Reply
Good point! @Ottawahitech was referring to monetary cost, presumably thinking that there may be a substantial paywall in the future. It's true that training these LLMs is extremely energy consumptive, but once they are trained their use is rather minimal in terms of energy consumption, compared to other normal computing tasks. As I said, we can produce text using an LLM locally on our PCs. One can compare it reasonably with storing some files in the cloud, or browsing the internet. It looks like training GPT-3 took 284 MWh. That's equivalent to powering 27 American homes for a year, or 84 electric vehicles for a year. That's substantial, but it's a one-time energy expenditure, such that the model can be used at minimal energy cost forever after.
That being said, there are estimates that all of AI computing takes around 85 TWh per year. That's on par with Bitcoin mining, and a serious deal which should not be overlooked. Image generation is very power-hungry, around 20x to 100x as much as text generation, and something which typically can't be done using consumer electronics. And OpenAI just released footage from their video generation model, which I'm sure is even more power consumptive. Amcgail (discusscontribs) 14:54, 21 March 2024 (UTC)Reply

Voting timeline


I believe this draft policy has now been up for a month. I will work on making sure the feedback up to now is incorporated. I think it's reasonable to wait another month before opening a vote, unless others think otherwise. —Kittycataclysm (discusscontribs) 23:34, 23 March 2024 (UTC)Reply

No idea what we are voting on. Are we allowing AI edits? I suggested a total ban and others may feel the same.--Xania talk 05:27, 31 March 2024 (UTC)Reply
Hi @Xania! The vote would be on the policy as it is written. From what I've seen in people's comments, it seems like the main concern is specifically with AI-generated content, and the policy currently does ban that. There is some additional nuance, since LLMs can be used in other ways that are not related to generating content. For example, an LLM can suggest an outline for a book, as has been described above and in the policy, and it seems like this is much less controversial. Could you explain your specific concerns that lead you to disagree with the policy as is written? I'd like to make sure they're properly addressed! —Kittycataclysm (discusscontribs) 14:01, 31 March 2024 (UTC)Reply
Now I see. Hadn't noticed any policy till now. I'll certainly be voting against that. Should be no place for AI tools on Wikibooks (certainly not in the present day).--Xania talk 00:40, 1 April 2024 (UTC)Reply
Got it—though, again, I'd love to know what your specific concerns are so we can try to work through this together. Currently, there is no policy on AI use, which means it would be fully allowed by default if the policy were not enacted (and I don't want AI-generated content here either). Cheers —Kittycataclysm (discusscontribs) 02:13, 3 April 2024 (UTC)Reply
AI edits are similar to machine translations. They must be well regulated.--Jusjih (discusscontribs) 20:38, 3 April 2024 (UTC)Reply

Disclosure


I am not sure that disclosure on the talk page works for me. The thing that must be disclosed is past input into the substantive page. Disclosure on the substantive page and in the edit summary will stay in step with the page content, because they are all updated together. I see the talk page as somewhat independent, more future focussed, and vulnerable to having content removed while the corresponding content on the substantive page remains.

Jfw01 (discusscontribs) 00:14, 11 April 2024 (UTC)Reply

Machine translation


From above discussions, it seems like there is sufficient reasoning and perhaps consensus to ban all generative LLM use here. But, there is still an open question of how to address machine translation. If anyone has thoughts on how to address this specifically, it would be great if you could share! —Kittycataclysm (discusscontribs) 12:17, 16 April 2024 (UTC)Reply

I'd be pretty hesitant to support AI translation, really. I think it's a great concept to get wikibooks translated into many more languages, but I'm not sure this actually helps with good, high quality translations.
Like, if I'm a fluent French speaker and I'm reviewing something written like that, I'm likely not going to be checking everything as finely as I would were I to translate it myself. There's no way to know how big of an issue this actually will be if allowed, but I've got a suspicion that there would be a lot of low quality or misleading translations that would just never be fixed.
I also question whether finding someone who is both sufficiently educated on a particular subject and fluent in a given language, who would also be unwilling to edit for wikibooks otherwise, is likely. I genuinely don't know that this changes anything in a beneficial way in this regard.
I'm happy for its use in aiding translation, or helping to judge accuracy, but I'm really not in support of the AI being used to translate sections into another language; it feels very similar to the idea of using it to generate books to me. Nolshru (discusscontribs) 16:37, 18 April 2024 (UTC)Reply
Alternatively to machine translations being treated as sovereign entities, a plugin for translating books could be adopted; it could even use something open-source (like LibreTranslate) in order to stay true to FLOSS philosophies :) Chickenenthusiast (discusscontribs) 02:53, 29 April 2024 (UTC)Reply

From people's notes and some looking around, I'm thinking that LLMs should not be used for translation. I also think that we should develop a separate content translation policy, using w:Help:Translation, w:Wikipedia:Translation, and w:Wikipedia:Content translation tool as references. —Kittycataclysm (discusscontribs) 17:44, 24 May 2024 (UTC)Reply

Summary and consensus


It's been several months now, and I've done my best to adjust the policy based on the multiple viewpoints expressed here in order to represent a consensus. Here is what I see:

  • Overwhelmingly, there is a general consensus that LLMs may not be used to generate content. I think we can confidently accept this part of the policy.
  • There is some debate on whether an LLM may be used as an "assistant", with some people thinking that it should be blanket allowed and some thinking it shouldn't be allowed for reasons of potential error and inaccuracy. However, the policy as currently written bridges this gap by specifying 1) that editors must be familiar with the subject when asking an LLM for suggestions, 2) that editors must be able to justify the changes and are still subject to consensus, and 3) that editors who repeatedly fail to properly use LLMs may be blocked.
  • There is consensus that any LLM use must be disclosed in the edit summary as well as the talk page. There is no consensus on whether the page itself must be marked.

Did I miss anything? —Kittycataclysm (discusscontribs) 22:35, 24 May 2024 (UTC)Reply

I disagree with your conclusions. There is no overwhelming consensus. There are a few persons opposed, a lot of requests for clarification, and even more confusion.
My view is that LLMs, or any AI assistants, are tools (like a spell checker). No fully automatically generated content should be used on Wikibooks (this is covered in part by the bot policy); all edits continue to be the responsibility of the contributors, as does basic quality control (that also falls to the community); content is king, and any copyright dispute should be responded to, but we should not preemptively restrict content inclusion on the grounds of possibilities or vague nuances.
My understanding is that LLM creations are public domain (under, and only relevant to, US law); it is not our function or job to protect the copyrights of anyone else but our own and the aggregate (the project). So the only concern is the impact of any dispute that may arise in regard to each project (in this we are distinct from Wikipedia, and each project should work out any issues internally). I do not see why the community should impose a project-wide limitation, nor have I seen any argument that justifies it...
There is also the problem of how you could detect violations. Why would the community check for violations if they could be useful with subsequent edits? The only way I can see for detecting problems is a massive dump of at least almost-valid content (not spam or vandalism), on so large a scale that it would overwhelm the community's capacity to examine content correctness; this is exactly what the bot policy deals with.
The issue seems to be reduced to the fear of automation and a perception of dilution regarding human responsibility and supremacy, like the referred policy draft regarding translation (a draft, not a policy). Again, the contributor of content is responsible for the contributed content. What the community should object to is irreversible damage being done to the project through error or abuse. --Panic (discusscontribs) 14:29, 27 May 2024 (UTC)Reply
@Panic2k4 thank you for weighing in! To make sure I understand you, let me see if I can summarize your points:
  • Non-human-generated content: No completely automatically generated content should be allowed; the primary risk of auto-generated content is overwhelming community moderation capacity.
  • Copyrights: We shouldn't be responsible for "protecting" the copyright of non-project creators (I admit I don't understand this argument).
  • Detection: There's no point in trying to identify violations of this policy (I don't fully understand this either).
  • Overall: We don't need a specific AI policy because aspects of it are already covered elsewhere, and editors should already be held responsible for content they add.
Did I get everything right? Cheers —Kittycataclysm (discusscontribs) 15:14, 27 May 2024 (UTC)Reply
Non-human-generated contributions (not content; we have no way to police that, nor the need). Not all contributions involve content: a contribution may be simply formatting or even bureaucratic work, and making that automatic and/or doing it in large numbers is already covered in the bot policy.
Copyright, like authorship, is claimed, not attributed. The copyright law (pushed internationally by the US) that removed the obligation to state a copyright notice left the public bound by the default expectation that all content is owned unless it states that it is not (this is idiotic, and everyone not in the content industry should excuse himself from operating under this unwritten expectation). We can even point to the DMCA as proof that the expectation is unworkable. Copyright is granted to authors as an incentive. So yes, one should act only to protect the rights he holds and, as a contributor to Wikibooks, the aggregate that holds one's work. No other obligation falls to the community (us).
What may be confusing you is that a contributor is himself licensing content, in that case to Wikibooks, and so is obligated in ways that go beyond the community, in order to protect himself regarding what he is "giving" us and the rights he holds over it. The Wikibooks community is not made up of only content contributors, and in fact the project itself works to "liberate", to a reasonable working level, any ownership and control interest that those contributors have over their contributed work.
Do you have a fail-safe way to detect content that has been generated by AI? I do not think so; even watermarked content can be falsified. Why should the community spend resources attempting to do that? Who benefits from that?
Overall, you got it. No valid need for the policy was given. I believe that some validity may be found on a per-project basis (it depends on the content and contributor), which goes more into vandalism or copyright infringement, things we already have covered, or finally into our bot policy. Panic (discusscontribs) 05:42, 28 May 2024 (UTC)Reply
"One should act only on the protection of the rights he holds and as a contributor to Wikibooks to the aggregate that holds one's work. No other obligation falls to the community (us)." The use of copyrighted material without permission is illegal, so it's better for Wikimedia communities to police themselves than for the project to get dragged to court or face other legal action.
This is relevant to the discussion at hand because LLMs can reproduce copyrighted text. Clarinetguy097 (discusscontribs) 16:44, 16 July 2024 (UTC)Reply
Not exactly. The use of copyrighted material depends on the license (it does not require express permission) and the copyright status of the work. That is why copyright is claimed, not attributed, and in certain jurisdictions one pays taxes on the benefits from that IP (I do not particularly know the minutiae on the US state that affects Wikibooks donations).
Now, each donation is a license to us by the contributor, so any liability for the donation under our license falls to the contributor, not us. We will never use any content without permission on Wikibooks; the issue is that the permission (or the right to license it to us) may not exist, and this would poison our work that used that content. This may be a greater problem for Wikipedia due to edit speed and the fact that its articles are seen as having value independently (rather than as part of a bigger work), but on Wikibooks each book project is isolated to a large degree and edits are slower and mostly by the same people (legally, all books are part of a bigger book that is Wikibooks, as we share the same setup and licensing; in the Cookbook this direct relation to Wikipedia is even more closely represented, as each recipe constitutes an independent unit of content).
So no, I disagree; we do not need to police ourselves more than we already do. The copyright holders do. As I have mentioned in previous discussions around these subjects: it should be made as clear as possible to all contributors of copyrightable content what doing so implies. But we live in a world where most don't care about these issues; that is why large corporations exist around them and people blindly "volunteer" work on YouTube etc. (regarding rights and compensation).
Regarding Wikimedia being dragged to court because of a legal action about Wikibooks content, that would be extremely hard; you should check the book projects we delete. In any case, no one is objecting to volunteers verifying large consistent contributions of content for copyright infringement, but if you read above, how would you do that regarding AI, especially after what it produces has been defined as uncopyrightable...
It is beyond an LLM (as defined, and if not intentionally created to do so) to replicate anything it consumes exactly, especially so if it has a large base of learned material, as the probabilistic nature would make it even harder to select the exact tokens in order. Most good LLMs use filters that make them more variable in their responses. A true LLM has no memory; the model is a virtual, coral-like network structure of strongly connected tokens. In fact, efforts are being made to trim that structure by removing unnecessary tokens/connections to make it smaller and more exact, even task-oriented. Now, an LLM has no real memory beyond the tokens it keeps, but an AI agent could have a memory, and most web-facing agents that serve as portals to the LLMs of the big corporations may indeed have not only memory for the history of the prompts but also keep and access data repositories (of all prompts and all content archived). But that is a copyright issue for those large corporations and their paying users, not us at Wikibooks, especially as, to my knowledge, the products of AI are not copyrightable; even if one replicated a work, the layers of isolation between that and us are too large. Panic (discusscontribs) 21:41, 16 July 2024 (UTC)Reply
Yes, Wikibooks does have liability (WB:COPY).
As for LLMs copying content, they've already been copying stuff closely enough to count as plagiarism. Of course, once somebody develops an LLM that never plagiarizes, there will be no need for disclosures of AI-generated content. Clarinetguy097 (discusscontribs) 16:39, 17 July 2024 (UTC)Reply
That refers to use of the content "given" to Wikibooks; it documents the obligations to us for use of our content. Now, the liability for breaking our requirements is mostly null (this goes off-topic), but most of the violations (to my knowledge) are fixed after a request by Wikimedia, or are ignored and not pursued legally (often our content is already public domain, non-copyrightable, or contributed anonymously; there is also the point that re-use of Wikimedia content is rarely commercial, and violations are mostly about removing any mention of Wikimedia brands and rebranding the content). Again, "not our content to protect" seems to be the rule of thumb.
"copying stuff closely enough to count as plagiarism" I myself have not been convinced by those that are crying wolf due to fears of AI and devaluation of their “creative” work. Again, LLMs by definition do not copy (they can't) they may recreate something very similar by mere chance. A copy requires the duplication of the original from source, they do not have that. As I said, an AI agent (or a simple bot, algorithm) with access to memory of works would easily make a copy. That is already covered on our policy.
Do not trust most of the media fire and brimstone regarding LLMs or the miraculous abilities they supposedly have; these are media exaggerations to protect their own interest groups. If you want to worry about it, consider the evolution of more complex AI agents that may use LLM models to make sense of reality, and the level of autonomy they will be granted. Panic (discusscontribs) 21:47, 19 July 2024 (UTC)Reply