
In line with Shog's proposal to improve the reject reasons, I think we should also teach reviewers to check for plagiarism whenever they hit a tag wiki/excerpt edit. So, I suggest that audits geared toward this end also be shown to +5k-rep reviewers who use the Suggested Edits queue.

The objective is to make reviewers aware that dumping articles from Wikipedia et al. is unhelpful and potentially harmful to the site, and to convey the same sentiment to editors.

  • What happens if the original content source changes by the time the audit is shown to the reviewer?
    – Troyen
    Commented Aug 21, 2014 at 0:54
  • @Troyen Wikipedia has an API to get an updated excerpt of the articles.
    – Braiam
    Commented Aug 21, 2014 at 1:02
  • "Select text → Right-click → Search on Excite" - That's just too much compared to "Approve"
    – random
    Commented Aug 21, 2014 at 1:18
  • @random well... I prefer that to the alternative.
    – Braiam
    Commented Aug 21, 2014 at 1:27
  • Related, but the retroactive version: How can tag wiki plagiarism be found effectively?
    – Braiam
    Commented Jan 7, 2015 at 4:23
  • Who posts stuff from Wikipedia? I've only used content from sites like NPM, Bower or GitHub. We aren't writers of fiction; we're coders and programmers trying to help others figure out the fastest / best way to accomplish their goals or fix their problems. I believe copy and paste should be fine, with links attached of course. How are you supposed to originally state something like this: npm install gulp?
    – Leon Gaban
    Commented Feb 20, 2016 at 14:49
  • @LeonGaban that's irrelevant. Tag wikis/excerpts are meant to explain how tags are used, for our own and other users. They aren't meant to mirror external resources.
    – Braiam
    Commented Feb 20, 2016 at 21:32

1 Answer


Bad idea. We should know better: let humans do what humans do best, and let machines do what machines do best. Auditing humans to check whether they're doing work well (plagiarism detection) when they shouldn't be doing that work at all is not productive.
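The automated search argued for here doesn't need to be sophisticated to catch wholesale copying. A minimal sketch (an illustration of the idea, not a production plagiarism detector) that compares a suggested edit against a candidate source using word n-gram overlap:

```python
# Illustrative near-duplicate check: Jaccard similarity over word
# trigrams. Threshold and n-gram size are arbitrary assumptions.
import re

def shingles(text: str, n: int = 3) -> set:
    """Return the set of lowercased word n-grams in the text."""
    words = re.findall(r"\w+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str, n: int = 3) -> float:
    """Jaccard similarity of the two texts' word n-gram sets."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

def looks_copied(edit: str, source: str, threshold: float = 0.5) -> bool:
    """Flag the edit for human review if it heavily overlaps the source."""
    return jaccard(edit, source) >= threshold
```

A check like this would run on every tag wiki edit and only surface probable copies to a human, which is the division of labor the answer is arguing for.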

  • This post seems to indicate humans don't know better. Commented Aug 14, 2015 at 10:12
  • @SuperBiasedMan: Ah, parsing ambiguity. We should know better than to leave plagiarism detection entirely to humans. We should automate the search, which makes it unnecessary to audit whether humans searched. Commented Aug 14, 2015 at 10:18
  • This would be a much stronger argument if you could show how easy it is to get high-quality automated plagiarism detection. Commented Aug 14, 2015 at 17:32
  • @NathanTuggy: The plagiarism detectors used in academia aren't that high quality, but "copied from Wikipedia" is not hard for them. Commented Aug 18, 2015 at 7:47
  • @MSalters: OK. Show me an example or citation or whatever. Y'know, back up your statement. Commented Aug 18, 2015 at 8:06
  • @NathanTuggy: Example: turnitin.com, a.k.a. plagiarism.org. (I'm not affiliated with them; they just mentioned Wikipedia specifically as a source they check against.) Commented Aug 18, 2015 at 8:18
