23 events
Mar 6, 2018 at 16:36 comment added Undo That's exactly how I see it, @alexr101. Yes, the rest of the posts would keep the status quo.
Mar 6, 2018 at 16:35 comment added alexr101 @Undo Yeah, I get the risk factor, but if the accuracy for these higher-risk posts is 100% (so far), then the chances are extremely low that a legit post will get deleted, since the rest of the posts will still get only 3 flags, right? That's just my 2 cents. I don't see that much risk given the reward :)
Mar 6, 2018 at 16:10 comment added Undo @alexr101 Five flags is far more risk than three, as a bunch of people here note. We take that into account when choosing accuracy levels. Our current consideration would have been 100.00% accurate over the 29,702 posts that would have matched this condition. See the accuracy graph. Fun fact - that one red dot in that graph isn't there any more. The system found more data in the last few days, learned from it, and that post's weight dropped below the threshold. I expect the total percentage of flagged posts getting 5 flags, going forward, to be around 63%.
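The condition Undo describes amounts to a threshold check on a post's learned detection weight: only posts whose weight clears a high bar get five autoflags, everything else gets the usual three. The sketch below is illustrative only; the names and numbers (FIVE_FLAG_THRESHOLD, autoflag_count, the example weights) are assumptions, not Charcoal's actual code.

```python
# Hypothetical sketch of weight-threshold autoflagging. The threshold value
# is assumed; in practice it would be tuned against historical accuracy data.
FIVE_FLAG_THRESHOLD = 280.0

def autoflag_count(post_weight: float) -> int:
    """Return how many automatic flags to cast for a detected post."""
    if post_weight >= FIVE_FLAG_THRESHOLD:
        return 5  # high-confidence detection: enough flags to remove the post
    return 3  # default: leaves room for human flaggers and mod review

# As weights are re-learned from new feedback, a post that once cleared the
# threshold (the "red dot" in the graph) can drop back below it:
print(autoflag_count(305.2))  # -> 5
print(autoflag_count(142.7))  # -> 3
```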
Mar 6, 2018 at 15:43 comment added alexr101 I think the key here is that 5 flags are only for posts that have a higher probability of being spam. So we should ask more questions: What percentage of total flagged posts is this? And what are the current accuracy stats on this "level" of posts?
Mar 6, 2018 at 0:37 comment added ɥʇǝS What if we asked autoflagging users to put something in their bio about it? Not ideal, but it might alleviate the concern a bit in lieu of bot accounts.
Mar 5, 2018 at 16:59 comment added Andy @Llopis, We've discussed that situation with a local moderator. The solution we've agreed on is to refrain from posting comments indicating how self-promotion can be perceived.
Mar 5, 2018 at 16:50 comment added llrs @ArtOfCode About the self-promotion: in Bioinformatics we recently had problems when a new user posted an answer with his own software and a user came from the chat with some aggressive wording. See this chat. It has triggered a discussion on the meta site about changing the restrictions. But I would advocate letting the beta sites handle it on their own.
Mar 5, 2018 at 16:47 comment added Undo We'll look at that, then. I'll let you know if we come up with a different way to do it.
Mar 5, 2018 at 16:40 comment added Mad Scientist @Undo there'd still be a bit of potential confusion for moderators that don't know how it works, but it would solve almost all of my transparency concerns.
Mar 5, 2018 at 16:32 comment added Undo Highly unlikely to happen @JohnDvorak - that'd be dev time, which isn't justifiable for this small a benefit.
Mar 5, 2018 at 16:08 comment added John Dvorak @Undo ideally said bot account would be generated by the grace of CMs, complete with unlimited flags and the ability to flag the same post multiple times.
Mar 5, 2018 at 16:00 comment added Undo I was thinking a spam flag, but from an obvious bot account @ArtOfCode. Custom flags are messy in other ways.
Mar 5, 2018 at 16:00 comment added ArtOfCode @Undo by that I assume you mean a custom flag?
Mar 5, 2018 at 15:58 comment added Undo Let's explore that a bit @MadScientist. I'm not promising this, but it's an option: What if each post got one flag from the SmokeDetector account (with relevant profile links), and the rest done the same way as now? I think that'd solve your concern without burdening us with creating 5-10 accounts and getting rep.
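Undo's proposal above can be pictured as a small change to the flag-casting flow: the first flag comes from a clearly labeled bot account whose profile explains the system, and the rest are cast from volunteer accounts as they are today. This is a minimal sketch under those assumptions; every name in it (FlaggerAccount, cast_spam_flag, autoflag) is hypothetical, not Charcoal's actual API.

```python
# Hypothetical sketch: one flag from a dedicated bot account, the rest from
# volunteers, so moderators reviewing flags can see the report is automated.
class FlaggerAccount:
    def __init__(self, name: str):
        self.name = name

    def cast_spam_flag(self, post_id: int) -> None:
        print(f"{self.name} flagged post {post_id} as spam")

def autoflag(post_id: int, total_flags: int, bot: FlaggerAccount,
             volunteers: list[FlaggerAccount]) -> None:
    # First flag from the bot account, whose profile links to an explanation.
    bot.cast_spam_flag(post_id)
    # Remaining flags come from volunteer accounts, same as the current setup.
    for user in volunteers[: total_flags - 1]:
        user.cast_spam_flag(post_id)

autoflag(12345, 3, FlaggerAccount("SmokeDetector"),
         [FlaggerAccount("vol1"), FlaggerAccount("vol2"), FlaggerAccount("vol3")])
```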
Mar 5, 2018 at 15:55 comment added Mad Scientist @Undo I don't see any solution except dedicated bot accounts with a link to an explanation in their profile. Everything else would rely on moderators knowing about Charcoal in the first place, and knowing where exactly to look. Dedicated accounts would allow moderators to review the flags with the existing tools, and it would make it clear to them that they are automated.
Mar 5, 2018 at 15:18 comment added Undo What would be the best way for us to resolve your concerns? We can do nearly any reporting you need; just let us know what that might look like.
Mar 5, 2018 at 9:54 comment added John Dvorak The best way to improve transparency would be to have a dedicated account (or pseudo-account or five) to do the flagging, but that would require dev team assistance (and the discussion of its merits should go to Petter's answer)
Mar 5, 2018 at 9:10 comment added ArtOfCode Oh - and on the self-promotion thing: you're right, autoflags stay away from that. It's too much of a varied issue for us to apply a network-wide filter to.
Mar 5, 2018 at 9:10 comment added ArtOfCode I can understand not being entirely comfortable with it. Let us know if we can alleviate that, of course. FWIW, I don't believe I've ever seen a bad spam flag that's gone unnoticed by the Charcoal team. That's anecdotal, of course, not hard evidence.
Mar 5, 2018 at 9:08 comment added Mad Scientist @ArtOfCode this puts the review of potentially bad spam flags entirely on the Charcoal team; the individual site communities and mods can't review this case (unless they know about Charcoal and monitor the site actively). I'm slightly uncomfortable with that.
Mar 5, 2018 at 9:08 comment added ArtOfCode Meanwhile, the transparency issue is something we're always looking to improve where possible. We'd like to hear it if you've got suggestions on that front (that goes for anyone, not just @Mad).
Mar 5, 2018 at 9:05 comment added ArtOfCode The graph in the post illustrates the accuracy of the automatic part of 5 flags. As for the human element on that last flag, you're right that it's not necessarily independent. However, while some people are more zealous than others in flagging, multiple people (significantly more than two) look at each post, and any disagreement will start raising alerts. That means it would take probably 4 or 5 people agreeing the post was spam for a false last flag to go unnoticed, and even then I might still pick it up when I read the day's transcripts.
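The "disagreement raises alerts" safeguard ArtOfCode describes could look roughly like the check below. The feedback labels ("tp" for true positive, "fp" for false positive) and the needs_review helper are assumptions made for illustration, not Charcoal's actual feedback code.

```python
# Rough sketch: escalate a report for human review when reviewer feedback
# conflicts or too few people have looked at it yet.
from collections import Counter

def needs_review(feedback: list[str], min_reviewers: int = 4) -> bool:
    """Return True when a report should be double-checked by a human."""
    counts = Counter(feedback)
    if len(feedback) < min_reviewers:
        return True  # not enough independent eyes on the post yet
    if counts["fp"] > 0:
        return True  # any disagreement at all triggers an alert
    return False

print(needs_review(["tp", "tp", "fp", "tp"]))        # -> True (conflict)
print(needs_review(["tp", "tp", "tp", "tp", "tp"]))  # -> False
```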
Mar 5, 2018 at 8:58 history answered Mad Scientist CC BY-SA 3.0