Factually, equating Bard with LaMDA is wrong. LaMDA was an early Google large language model; today, Bard uses Gemini, and between LaMDA and Gemini, Google also had PaLM. Also, as of 8 February 2024, Bard no longer exists: several Google offerings, such as Assistant, Bard, and Duet AI in Google Workspace, have been rebranded as Gemini.


— see revision 13 for changes that address this concern

I would also recommend clarity on the use of AI tools as editors. As these technologies make their way into tools like translation services and editors (such as Grammarly or Microsoft Editor), we should consider that such tools are more likely to be allowed. Based on my understanding of the heuristics available to moderators for judging whether a post is AI-generated, I think these tools would generally be OK. They tend to be more transformative than generative in nature, though they aren't always free from error. However, assuming they aren't generating new content, they can be useful in improving the accessibility of the site and its content to a broader range of people, especially on the English-only sites in the network.


— see revision 13 for changes that address this concern. Guidance that needs to be more specific than what is proposed here is probably better decided on a per-site basis, with changes made locally to that site's help center article.

The edge case of questions about the use of generative AI also needs to be considered. Even on sites that prohibit the use of AI to generate answers, an answer that includes generative AI output is likely not prohibited if that output supports the answer. On Software Engineering, for example, I can see questions about using generative AI technologies to support requirements engineering. An answer that includes information about prompting techniques and examples of output (properly attributed) would be allowed, assuming that the rest of the answer was written by a human and the generated portions support it.


— see revision 13 for changes that address this concern

I'm also struggling to reconcile the points made in the answer about why disclosure is needed with the continued stance that prohibitions on AI-generated content should be decided site-by-site rather than network-wide. The three risks are presented in a way that makes a very compelling case for an outright prohibition on generated content. I would expect that many, if not most, reasonable people would look at those risks and conclude that curating generated content, even with attribution, is not worth the risk of noisy, false, misleading, unwanted, or otherwise low-quality answers.
