23
\$\begingroup\$

One of PPCG's oldest rules, mentioned in What topics can I ask about here?, is that all answers to challenge questions must:

Be a serious contender for the winning criteria in use. For example, an entry to a code golf contest needs to be golfed, and an entry to a speed contest should make some attempt to be fast.

While we're still working on the wording, at least for code golf challenges, we have some guidelines and rarely ever dispute whether a particular post is a serious contender or not.

For other types of challenges, the boundaries seem less clear. In particular, these two answers seem to divide the community.

Both answers have several things in common:

  • Compared to other answers, they use a very simple strategy.
  • They have a worse score than all other answers to their respective challenges.
  • They have more upvotes than all other answers to their respective challenges.
  • They have more downvotes than all other answers to their respective challenges.
  • Supporters of these answers argue that they're useful as baseline submissions, as answers with more complex approaches should be compared to these ones.
  • Critics of these answers argue that they're not serious contenders for the winning criteria in use. Both answers have been flagged for moderator attention.

Voting habits are out of the scope of this discussion, but I do hope to reach a consensus with regard to these two questions:

  1. Should these answers be considered serious contenders?

    If not, they'd have to be removed in compliance with our policy about answers not meeting the challenge specification.

  2. If the answer to the above question is no, how could baseline solutions be posted instead?

    Just because something isn't a valid answer according to our rules doesn't mean that it isn't valuable at all.

\$\endgroup\$
7
  • \$\begingroup\$ Shouldn't a baseline be a serious contender? I feel like a baseline is of no use if virtually any naive approach beats it. \$\endgroup\$
    – Wheat Wizard Mod
    Commented Jan 15, 2018 at 16:24
  • 3
    \$\begingroup\$ I'd like to bring up another potential baseline: On a kolmogorov-complexity challenge, do we allow a print "Text goes here" submission? Because that's the smallest trivial answer. \$\endgroup\$ Commented Jan 15, 2018 at 22:55
  • 1
    \$\begingroup\$ Related link: Why is this non-serious-contender answer still around, despite a “helpful” flag? \$\endgroup\$
    – DELETE_ME
    Commented Jan 16, 2018 at 1:20
  • \$\begingroup\$ What does it mean: Baseline solution? \$\endgroup\$
    – user58988
    Commented Jan 16, 2018 at 8:15
  • \$\begingroup\$ Related discussion on standard loopholes. \$\endgroup\$ Commented Jan 17, 2018 at 14:15
  • 1
    \$\begingroup\$ I don't have an answer, but barring these types of solutions on the basis that they are not competitive is no different from barring FGITW answers. Both take little time or effort to golf, both are received "well" (by voting), and, in the case of FGITW, they are both often suboptimal. Should we then bar FGITW answers because they are not optimal? Should we also ban people from golfing in Unary, Starry, etc., because they perform worse? I don't think banning these "baseline" answers can be done easily without implications for other areas of this site. \$\endgroup\$ Commented Jan 20, 2018 at 19:12
  • \$\begingroup\$ @AdmBorkBork More related discussion on standard loopholes. \$\endgroup\$ Commented Jan 21, 2018 at 20:52

11 Answers

16
\$\begingroup\$

If a baseline solution is given, it should be included in the challenge text, not posted as an answer

While baseline solutions are useful, they are still (often) not serious contenders for winning the challenge, so they have no place as answers. Instead, include them in the challenge text. This is common practice: many existing challenges include reference implementations that would be considered non-competing, but which are still a useful resource for users seeking to answer the challenge.

\$\endgroup\$
19
  • 3
    \$\begingroup\$ If a challenge is difficult, can I provide a completely non-golfed answer? suggests exactly that for code golf. \$\endgroup\$
    – Dennis
    Commented Jan 16, 2018 at 2:17
  • \$\begingroup\$ Would we be good with any user editing in a baseline solution into the challenge text? \$\endgroup\$
    – xnor
    Commented Jan 16, 2018 at 2:23
  • 2
    \$\begingroup\$ @xnor If there isn't one (or if the existing one is flawed/buggy in some way), I wouldn't mind users editing one in. IMO it's not that different from users editing in additional details for a challenge (like a Wikipedia link). \$\endgroup\$
    – user45941
    Commented Jan 16, 2018 at 2:25
  • \$\begingroup\$ My thought exactly. Possibly it would even be useful to include several baselines, along with their scores: for example, the "most common character" solution, the "print exact text" solution, etc. \$\endgroup\$ Commented Jan 18, 2018 at 4:21
  • \$\begingroup\$ To prevent silly solutions, the challenge author can of course require that all entries perform at least better than the baseline. \$\endgroup\$
    – Sanchises
    Commented Jan 18, 2018 at 13:56
  • 1
    \$\begingroup\$ @Sanchises Many people agree that it's not a good idea to do that \$\endgroup\$
    – user45941
    Commented Jan 18, 2018 at 14:21
  • 1
    \$\begingroup\$ @Mego So it seems. I think Martin offers a good argument as a reply to your comment, there, that it is obvious anyway and doesn't need to be enforced explicitly. \$\endgroup\$
    – Sanchises
    Commented Jan 18, 2018 at 14:37
  • \$\begingroup\$ @Mego How do you define a baseline? If we are going to disallow them, we need a good definition of baseline. \$\endgroup\$ Commented Jan 22, 2018 at 20:34
  • \$\begingroup\$ @NathanMerrill A baseline solution is a solution which is not a serious contender, but may have some value as a comparison for other, serious contenders. That should be clear from the question. \$\endgroup\$
    – user45941
    Commented Jan 22, 2018 at 21:22
  • \$\begingroup\$ @Mego then we should remove many bots from this challenge and other challenges. Most KoTH submissions follow a simplistic strategy, and it is quite a stretch to consider them a "serious contender". \$\endgroup\$ Commented Jan 22, 2018 at 23:25
  • \$\begingroup\$ @NathanMerrill I have no desire to debate the application of this proposed policy on every single challenge. \$\endgroup\$
    – user45941
    Commented Jan 23, 2018 at 0:04
  • 1
    \$\begingroup\$ @NathanMerrill KotHs are, by definition, very different beasts from ordinary programming challenges. They are designed so that there is not one optimal strategy, and the competitiveness of one strategy also depends on the other strategies being employed. Thus, it's harder to say that a given bot is not a serious contender, aside from obvious cases like suicidal bots (which are already banned as a standard loophole). As long as a bot makes some effort to optimize their score, I think it's fair to say that it's a serious contender. \$\endgroup\$
    – user45941
    Commented Jan 23, 2018 at 2:54
  • 1
    \$\begingroup\$ As an example of how difficult this can be to define for a KotH, I posted a KotH with 2 very basic example answers (thinking they would end up in last place), and there are several far more complex answers below them on the leaderboard. \$\endgroup\$ Commented Jan 25, 2018 at 21:35
  • 1
    \$\begingroup\$ @NathanMerrill The OP can obviously do whatever they want, but if this became a rule, and there was a strong consensus in support of it, then they shouldn't be surprised if their question got downvoted or closed for ignoring the rule. \$\endgroup\$ Commented Jan 26, 2018 at 1:38
  • 1
    \$\begingroup\$ @NathanMerrill I personally don't think it matters, but you can argue that point if/when it ever becomes a rule. \$\endgroup\$ Commented Jan 26, 2018 at 10:05
13
\$\begingroup\$

"Serious contender" applies to the approach, and the resulting implementation

We already apply this methodology to entries, even if we don't realize it. For example, an iterative solution may score 75 bytes while a recursive solution in the same language scores 70. Or a solution in language X may score 80 bytes, while a solution in golflang Y scores 5. In any case, we don't declare the longer solution a non-contender just because it happens to be losing, provided that an effort is made to optimize for the particular winning criterion (e.g., removing extraneous whitespace).

The same thing applies here. (Or, for that matter, to challenges that have "joke" submissions to get the ball rolling, but that's an aside.)

For Starry Night, an approach is to simply average the entire image. It might not be the best approach, and it turns out that it isn't the best approach. The answer, however, made a serious attempt at solving the challenge for that particular approach. The author even updated the solution to a better color after an optimization was pointed out.
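For concreteness, the whole approach fits in a few lines. A minimal sketch, assuming Pillow is installed and using a hypothetical starry_night.png as the target image (not the challenge's actual harness):

    from PIL import Image  # Pillow, assumed available

    # Load the target image and average every channel over all pixels.
    img = Image.open("starry_night.png").convert("RGB")   # hypothetical filename
    pixels = list(img.getdata())
    avg = tuple(sum(channel) // len(pixels) for channel in zip(*pixels))

    # The "submission": a same-sized image filled with that single average colour.
    Image.new("RGB", img.size, avg).save("output.png")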

For Moby Dick, an approach is to output the most common character. It might not be the best approach, and it turns out that it isn't the best approach. The answer, however, made a serious attempt at solving the challenge for that particular approach, by using the most common character and writing it in a language that optimized that portion of the scoring.
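In code, that entire strategy is roughly the following sketch; the filename and predictor signature are placeholders of mine, not the challenge's actual interface:

    from collections import Counter

    # Count characters in the test text and keep the single most common one
    # (a space, in Moby Dick's case).
    text = open("whale.txt").read()        # hypothetical filename
    best = Counter(text).most_common(1)[0][0]

    def predict(preceding_text):
        # The baseline ignores its input and always guesses the same character.
        return best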

Just like with the code-golf answers, there could be (and likely will be) a better approach, but that doesn't make these answers invalid.

\$\endgroup\$
5
    \$\begingroup\$ I agree with the body but don't see how the title of your answer matches. People seem to argue that "simple" approaches do not meet the "serious effort" definition of serious contender. \$\endgroup\$
    – Fatalize
    Commented Jan 17, 2018 at 15:53
  • \$\begingroup\$ @Fatalize I'm arguing that just because the approach isn't the best approach doesn't mean the answer is invalid. If that message didn't get through and there's a better way to word that, please let me know. \$\endgroup\$ Commented Jan 17, 2018 at 16:05
  • 2
    \$\begingroup\$ I disagree with your premise. Implementing a trivial, very sub-optimal solution to a code challenge is akin to writing code without making an attempt to golf it in code golf. \$\endgroup\$
    – user45941
    Commented Jan 17, 2018 at 17:13
  • 9
    \$\begingroup\$ I think it's somewhat disingenuous to assert that either of the answers mentioned were a serious attempt to solve the challenge. That's certainly not how most voters viewed them. If they were judged on the quality of the implementation, they wouldn't have so many upvotes. My guess is most voters just thought they were good jokes. And that's fine if those are the kind of answers you want on the site, but then you might just as well make everything a popcon. \$\endgroup\$ Commented Jan 17, 2018 at 17:39
  • 2
    \$\begingroup\$ @JamesHolderness I disagree. I think they are highly upvoted because they are simple and easy to understand. When you go to a question where all the other answers are a bit over your head, it makes sense to upvote one you can understand at a glance. \$\endgroup\$
    – mbomb007
    Commented Jan 26, 2018 at 21:08
7
\$\begingroup\$
  1. Should these answers be considered serious contenders?
  2. If the answer to the above question is no, how could baseline solutions be posted instead?

I think there's a problem in the framing of the question, because in the case of the Moby Dick answer I would argue that it is neither a serious contender nor a baseline solution. The baseline solution there is either (a) the literal text with a charAt call, for a score slightly greater than 1215235; or (b) a compressed version of the literal text with uncompression and charAt, with a score slightly greater than 762421.

I would have said (a) unconditionally were it not that the sandbox entry (probably only visible with 20k rep) explicitly says that it is designed with the intention that (b) not be a winning strategy, suggesting that the author intended (b) as a baseline.

That makes that answer in particular >25% worse than the baseline.


As for what serious contender means, I would echo Mego's answer to one of the linked questions:

In short, if the only way a submission could win a challenge is if no other solutions were posted, it's almost certainly not a serious contender

On that basis, baseline solutions are not serious contenders (and worse-than-baseline solutions a fortiori are not either).

\$\endgroup\$
4
  • 1
    \$\begingroup\$ Note that one implication of that criterion is that being better than the baseline is not sufficient to be a serious contender. This is as it should be: in code golf, removing whitespace without golfing variable names would beat the baseline but a serious contender should at the very least do both. \$\endgroup\$ Commented Jan 16, 2018 at 14:07
  • 6
    \$\begingroup\$ While I agree that "charAt" is a good baseline, I can also see "Most common character" as a good baseline. In essence, I think that "a baseline" is any strategy that is as simple as possible. The strategy of "Guess the most common character" is employed by many answers, but the baseline answer only does that and nothing else. \$\endgroup\$ Commented Jan 16, 2018 at 14:53
  • 1
    \$\begingroup\$ For what it's worth, I doubt /// can do better than the answer in question. \$\endgroup\$
    – CAD97
    Commented Jan 16, 2018 at 23:21
    \$\begingroup\$ @CAD97 I don't think code-challenges like this one (unlike code-golf) are scored independently for each language. \$\endgroup\$
    – DELETE_ME
    Commented Jan 17, 2018 at 14:15
7
\$\begingroup\$

Yes, but it depends on how you define baseline

I'd like to make 2 points here:

  1. Baseline submissions are defined by the strategy they choose
  2. It's hard (if not impossible) to define a line that divides the baseline and the winner

1. Baseline submissions are defined by the strategy they choose

I believe that baseline submissions can be competitive. But first, an analogy:

Let's say you're playing some laser tag with some friends. Some competitive strategies you could try are:

  • Run around shooting everybody, trying to hit as many others as possible.
  • Hide in a spot, trying to avoid getting hit, while defending against those approaching
  • Team up with others to defeat large groups and defend each other

Now, while the winner depends on the exact scoring rules and execution, the winning submission will most likely employ multiple strategies, or do a particular strategy really well.

However, now let's say there's a KoTH with a similar premise. Each of the following baseline bots would be acceptable:

  • def action(): stepForward(); shoot();

  • def action(): walkToWall(); shoot();

  • def action(): walkToAlly(); shoot();

Here are some that wouldn't be allowed:

  • def action(): shoot();

  • def action(): walk();

  • def action(): die();

Each of the baseline submissions follows a legitimate, competitive strategy. None of them is likely to win, because their implementations are too simple.

Therefore, a baseline is defined by its strategy, and by whether that strategy is competitive.

2. It's hard (if not impossible) to define a line that divides the baseline and the winner

"Mean of all channels" is following a legitimate strategy "pick a color similar to the image". If we disallow the baseline submission:

  • Would I be allowed to submit a simple, two color image with a dark blue bar on the left, and a light blue bar on the right? What about the Java Voronoi submission?

  • Would I be allowed to post a 2x2 pixel image that was scaled up to the appropriate size? What about 32x32?

  • Would I be allowed to use a super-lossy, built-in compression algorithm? Does it matter how lossy that compression algorithm is?

It's easy to tell if a submission has a strategy. (A white image is invalid because there's no strategy). It's hard to differentiate between a baseline strategy and the winning strategy. This leaves us with 3 options:

  1. Disallow all submissions that don't win (in their respective language)
  2. Make "serious contender" an ambiguous line between "baseline" and "winning" that we need to vote and debate about every time.
  3. Allow baseline submissions as long as they are implementing a viable strategy.
\$\endgroup\$
10
  • 2
    \$\begingroup\$ I'd really love some feedback from those that disagree, specifically related to my last point: Do you think that there are more than 3 options? Or do you prefer option 1 or 2? \$\endgroup\$ Commented Jan 16, 2018 at 17:02
  • 1
    \$\begingroup\$ I personally upvoted pretty much entirely for that last point. IMHO, baselines are defined by their algorithm. Any post should be allowed as long as it actually golfs the code (i.e. no unneeded whitespace, long var names, etc.) or makes an attempt (in the context of that starry night question, choosing an average color instead of the white bit). I think that as long as an answer is competitive in its chosen approach, it should be allowed. \$\endgroup\$
    – Riker
    Commented Jan 17, 2018 at 2:02
  • \$\begingroup\$ Option 4. Define the baseline solution(s) in the challenge text (as suggested by Mego). While sandboxed, try and think of any obvious baselines that the OP has missed and suggest they be included in the question. I think you'll find people far less likely to insist on the validity of these no-effort answers when they haven't been posted yet and they're not trying to defend a 100+ score. \$\endgroup\$ Commented Jan 22, 2018 at 19:37
  • \$\begingroup\$ @JamesHolderness That still has the same problems: How do you differentiate between a baseline and a legitimate strategy when deciding what goes in the post? \$\endgroup\$ Commented Jan 22, 2018 at 20:33
  • \$\begingroup\$ You don't need to eliminate all poor strategy solutions. Most of those will automatically get fewer votes on account of their poor performance. All you're trying to do is identify the really obvious, no-effort solutions that get lots of upvotes for the wrong reasons. I think that's a far easier problem to solve than trying to decide the dividing line between "legitimate" and "baseline". \$\endgroup\$ Commented Jan 22, 2018 at 21:56
  • \$\begingroup\$ @JamesHolderness I really like that suggestion - setting a boundary in the challenge before there is any emotional attachment to an answer. However, minimum scores are not recommended and I strongly agree with that decision. In particular, the very appealing idea of setting a "just good enough" score falls down because different languages will have very different scores for the same challenge. \$\endgroup\$ Commented Jan 25, 2018 at 21:47
  • \$\begingroup\$ @trichoplax I agree that a minimum score is not a good idea, but I don't think that's needed for this concept to work. You could just add a restriction like: "A single hardcoded character is not an acceptable answer" for the text challenge; or "A single hardcoded colour is not acceptable" for the image challenge (arguably these are already standard loopholes). That doesn't mean all answers have to score better than those baselines, though. You're just trying to encourage answers that show more effort than that. \$\endgroup\$ Commented Jan 25, 2018 at 22:31
  • \$\begingroup\$ @JamesHolderness I'd be totally ok with that, but that's basically the same situation we have today. The burden of defining what is allowed or isn't is up to the challenge writer, and if a baseline is found later and posted as an answer, then it'd be poor form for the challenge writer to exclude it. \$\endgroup\$ Commented Jan 25, 2018 at 22:40
  • \$\begingroup\$ Also, in both of the cases above, the challenge writer has been in favor of the baseline being posted. \$\endgroup\$ Commented Jan 25, 2018 at 22:40
  • \$\begingroup\$ @NathanMerrill There are plenty of rules that we expect challenge authors to abide by. If there is a consensus that these kinds of answers are not wanted on the site, and assuming baselines in the question actually solve that problem, then that simply becomes another guideline that challenge authors are encouraged to follow. \$\endgroup\$ Commented Jan 26, 2018 at 1:25
5
\$\begingroup\$

Reword what a serious contender is, it is too elitist

Our current definition of serious contender is that it “is a submission which makes a serious effort towards optimizing the submission's score”.

This is in my opinion not the right definition. In particular, a new golfer could submit an answer that can be golfed a lot with simple, well-known tricks. With that definition, one could argue that such answers are not serious contenders because the newcomer did not make a serious effort to learn simple golfing tricks to improve their answer, which is obviously ridiculous.

A better definition to me would be:

A serious contender is a submission which does not make a deliberate effort towards degrading its own score.

With this definition, answers with common non-contender issues, such as long variable names, are still non-contenders. However, beginner answers are now objectively serious contenders.

And with this definition, "baseline" answers (whatever it may be) are also serious contenders, because they are not intentionally trying to get a bad score.

We also have to remind ourselves that not everyone can come up with strategies to solve questions that the seasoned PPCG user would consider "interesting".

Note

I also consider our current definition of serious contender to be bad for the following reasons: many PPCG challenges can be solved by trivially chaining a bunch of built-ins in many languages, which is far from requiring "a serious effort".

\$\endgroup\$
3
\$\begingroup\$

The problem is interesting -- who cares what the "baseline" is

TL;DR: I want more answers, not fewer. I want to explore ideas.

I think that both "Moby Dick" and "Starry Night" are really interesting questions, and both of them have some great answers. But I also think we've taken a wrong turn somewhere: By even discussing to limit the answers to "serious contenders" by this or that definition, we are killing some of the possibilities to create and to explore the very concepts behind the questions.

To me the two questions are great because they make me think more than ordinary code golf questions, which usually have something of an upper limit on the amount of optimization possible. These particular questions, on the other hand, are more open-ended, with virtually no limit on improvement, while they also have a great measurement of "success"; "Moby Dick" has an especially well balanced scoring system.

Now, it's not only the most successful (by an arbitrary score) answers that are interesting*.

For instance, the first thing I thought of was n-grams: what is the most common successor of input x? It turns out someone had already posted an answer using something like that. But the concept of n-grams is important, simple and interesting. So I started to think: what variations of this can I make? Can it be simplified? Generalized? That's when I tried the "most common distance" function. It doesn't do great. In fact, it's about 15% worse than always returning a space (here's a pastebin comparing the two).
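Roughly, the "most common successor" idea looks like this. A sketch only; the filename and predict interface are my own placeholders, not the challenge's harness:

    from collections import Counter, defaultdict

    text = open("whale.txt").read()        # hypothetical filename
    followers = defaultdict(Counter)
    for a, b in zip(text, text[1:]):       # count which character follows which
        followers[a][b] += 1
    successor = {a: c.most_common(1)[0][0] for a, c in followers.items()}

    def predict(preceding_text):
        # Guess the character that most often followed the last character seen.
        last = preceding_text[-1] if preceding_text else " "
        return successor.get(last, " ")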

The premise sounds kinda promising, but the result is disastrous. By the scoring system it's not competitive at all, but to me that doesn't make it any less interesting. In fact this is an interesting finding in its own right. It also makes you appreciate the other answers more, because you can see how much better they do, some with quite a small amount of code.

But if "always return space" should be forbidden because of its bad score, then obviously this other solution is way under the threshold. And this I think is a pity. Because I don't really care that much for the one best answer with the one best score; I like to see the whole picture from as many angles as possible. This is what all of you want to forbid, and I can't see why.

By instead allowing any answer, regardless of an arbitrary score, we can make up a catalogue of concepts around the subject, and explore and discuss more paths and theories. Any answer with an original idea is worth keeping.


*) Not saying that my "print a space" solution was super interesting. I don't care. This is not a defense of that post; this is an argument to allow exploring ideas.

\$\endgroup\$
4
  • \$\begingroup\$ I agree with the majority and the sentiment of this post: Let's not kill creativity and interesting submissions. However, your last sentence is too strong for me. I don't think that any original idea is worth keeping. If there was a submission that randomly picked letters (or tried to pick the wrong letters), I don't think that it would be worth keeping, even though it would be original. \$\endgroup\$ Commented Jan 16, 2018 at 22:59
  • 8
    \$\begingroup\$ Since the very beginning this has been a contest site, not a catalogue site. You're asking for a significant change in scope. (And FWIW the word "catalogue" in itself carries baggage: on this site it has sufficient history of being used as a justification for posting crap). \$\endgroup\$ Commented Jan 16, 2018 at 23:01
  • \$\begingroup\$ I get that @PeterTaylor. I'm trying to drag discussion in another direction because I think the very premise of the discussion is off. \$\endgroup\$
    – daniero
    Commented Jan 17, 2018 at 6:36
  • 1
    \$\begingroup\$ Also, we still have an objective winning criterion, so a winner can and should be chosen with the "accept" button \$\endgroup\$
    – daniero
    Commented Jan 17, 2018 at 7:14
2
\$\begingroup\$

1 - They are something we should try to mitigate, but not prevent

I am very much in support of posters of these questions adding solutions of their own to their questions in order to give a baseline. I agree with the stated opinions that throwaway answers designed to just "get the ball rolling" often aren't very interesting and don't deserve the attention they get. But I don't think that we should delete them.

It's been mentioned before, with regard to trivial solutions, that taking action against solutions that aren't particularly "good" for some definition thereof isn't ideal. Votes will, unfortunately, always tend toward certain answers, whether it be because they're early, funny, or in an aptly-named language. You can probably think of more examples, too (the easiest way would be to look at the most popular challenges). Trying to fight the will of the people by taking direct action against them is, in my opinion, foolish.

Preemptive action, to me, is the best solution, but it is of course not perfect. For a problem like this, you take away the possibility of someone posting a simple baseline by writing one yourself and including it in your post. Does that prevent someone from writing one anyway? No. There might be a slightly less simple baseline that they implement, or maybe they'll post the baseline in another language, but we shouldn't be taking pains to account for all of these possibilities.

I feel like to some extent there is an implicit claim (at least in the way I am interpreting it; feel free to argue with me) that these posts don't "deserve" their votes or attention. Trying to make sure answers get votes proportional to the ones they "deserve" is ridiculous. HNQ-drawn voters have, for better or for worse, votes as powerful as our own. It might feel unfair to the regulars -- I for one am displeased to see a trivial solution in a golfing language overshadow a creative solution in another -- but the votes have been cast.

2 - On elitism

I also wanted to second what @Fatalize had to say and give my own input regarding elitism. Let me preface this by saying that I like this site a lot. There have been tons of interesting challenges I've really enjoyed answering and answers I've really enjoyed reading. But I do think there are some things that make it difficult for newcomers.

I feel like PPCG is a bit insular as a community, which isn't too much of a problem for most users. The standard that submissions and answers are held to and the pretty strict policing, in fact, are in part what makes the content on this site so good. I see similar quality control on other Stack Exchange sites. However, it can be daunting to the uninitiated, which I feel is spoken to by the number of new questions and answers we get that need rephrasing, edits, and sometimes even deletion. If I recall correctly, I was able to join in with relative ease because I lurked enough to pick up on the sort of "culture," for lack of a better word, but not everyone is as careful as I am, nor should they have to be.

In answers, there's the way we format them to include bytecount, the way we count bytes for some languages, the (optional, but encouraged) use of TIO, explanations, so on and so forth. In questions, there's the use of the sandbox, standard guidelines, policies on default exceptions, etc. I'm sure most of this is covered in the guide to the site, but this doesn't make it any less daunting. And let's not forget "culture"-specific things that people need to learn (what golfing and esoteric languages are, references such as "crossed out 44," outgolfing Dennis, etc.).

There's nothing wrong with any of this, and I think as a whole the community is welcoming and kind, but it's still worth bearing in mind how PPCG might appear to those who don't visit it every day. Any transition to a new community requires some getting used to, but for a public community like PPCG I think it's best to try to be cognizant of this public presence, especially of the people who might not be so in the know. I feel like I'm sort of on the outskirts of the community of PPCG, and I've been visiting the site semi-frequently for over two years now (wow). Keep the guys who are seeing it for the first time in mind, too.

Not that I think PPCG has a significant problem with this, I've said before and I'll say again that I think for the most part it's welcoming. But in my opinion, trying to police things that get more and more specific may make it harder for newer members to contribute.

Etc

Sorry for the wall of text; I realize I wrote a lot for this. I didn't really intend to, I guess it's something I'm more passionate about than I thought.

I welcome any argument against what I have to say.

\$\endgroup\$
1
  • 7
    \$\begingroup\$ "but not everyone is as careful as I am": this is true; "nor should they have to be": this is not. Lurking until you have a feel for the culture is part of netiquette which I think probably predates the web. (Usenet historians may be able to find an actual date). \$\endgroup\$ Commented Jan 17, 2018 at 11:37
0
\$\begingroup\$

I think there are two separate issues to consider when trying to determine if an answer is a serious contender:

  1. The approach

  2. The implementation

In code golf, the approach can be as nonoptimal as anything, and likely nobody will complain. If the code that implements the approach is not even trivially golfed, almost everyone will agree that the answer is not a serious contender.

As I see it, the case with the two particular answers in question is that one side views the approach as trivial, highly nonoptimal, and therefore not a serious contender, whereas the other side points out that the implementation is as well-optimized as it can be with that approach, and that there is no well-defined line after which an approach becomes a non-serious one.

As the supporters argue, both of these answers can be thought of as a sort of a baseline for what kind of a score an answer should at least be able to attain. However, like the property of being a serious contender, it is more or less subjective to decide what is a good baseline. Also, I'm not convinced that a solution that marks a baseline should be posted as an answer.

My suggestion is that for optimization challenges like the ones in question, where the score doesn't depend on the language used to write the answer, the challenge asker should define a baseline that an answer has to beat to be considered a serious contender.

The baseline should probably be based on the score of a trivial approach, much like the two answers at issue here, perhaps slightly buffered against similar but better implemented trivial approaches. The baseline/limit should be stated in the question, perhaps along with an implementation of it. If it would crowd the question body too much, the implementation could be posted as a community-wiki answer by the challenge poster.

\$\endgroup\$
5
  • 3
    \$\begingroup\$ the challenge asker should define a baseline better than which an answer has to score to be considered a serious contender. That would stop languages like brain-flak, brainfuck, or unary from ever competing in kolmogorov-complexity challenges like this ever again. Even if they pick a good algorithm, the extreme verbosity of these languages would make them unable to compete. \$\endgroup\$
    – DJMcMayhem
    Commented Jan 15, 2018 at 17:36
  • \$\begingroup\$ @DJMcMayhem Perhaps the baseline/limit should only be applied to the part of the score that is not dependent on the length of the code? \$\endgroup\$
    – Steadybox
    Commented Jan 15, 2018 at 17:42
  • \$\begingroup\$ @DJMcMayhem In the Starry Night challenge the score doesn't depend on the length of the code (although there is a maximum length), and in the Moby Dick challenge the E part of the score doesn't depend on code length either. \$\endgroup\$
    – Steadybox
    Commented Jan 15, 2018 at 17:44
  • \$\begingroup\$ So this is {[xnor]'s answer} + {it is posted by the asker}? What's the point in requiring the latter, if it's CW anyway? \$\endgroup\$
    – DELETE_ME
    Commented Jan 16, 2018 at 1:18
  • \$\begingroup\$ @user202729 It is somewhat subjective as to what counts as a baseline. If the challenge explicitly defines it, it will be clear which answers score better than the baseline and which don't. \$\endgroup\$
    – Steadybox
    Commented Jan 16, 2018 at 13:01
0
\$\begingroup\$

A baseline for such challenges can be an answer

A baseline for such challenges usually behaves as a valid answer, which allows a reasonable comparison. And I do see cases where a score worse than the baseline appears, perhaps because of a slow language or something similar.

It's of course also fine to place it only in the question; that's up to the OP.

\$\endgroup\$
-1
\$\begingroup\$

Require them to be posted CW

Only allow baseline submissions to be posted Community Wiki, where the poster doesn't get rep and everyone is encouraged to edit.

Baseline answers are a helpful resource for coders to check whether their method actually does decently. It saves everyone the work of writing the same basic code to compare against, and its boilerplate can perhaps be used as a starting point.

I think CW status fits well. It's not really a submission that deserves rep (much less the kajillions given by HNQ voters), but a community resource that any kind poster can write or contribute to, including the question asker. And the CW label makes posters less shy about editing the formatting, fixing the code, adding explanations, comparing alternatives, etc.

To make sure I'm answering the question, I don't see baseline submissions as serious contenders at all, otherwise the phrase "serious contender" is basically meaningless, but there's a good niche for them as CW posts.

\$\endgroup\$
3
  • 4
    \$\begingroup\$ Not bad. The only remaining problem is that the post still sticks to the top, but in this case "sort by active" will help. \$\endgroup\$
    – DELETE_ME
    Commented Jan 16, 2018 at 1:01
  • 7
    \$\begingroup\$ Community Wiki is not a rep waiver. \$\endgroup\$
    – user45941
    Commented Jan 16, 2018 at 2:07
  • \$\begingroup\$ Sacrificing the rep would still leave the answer at the top of the page if it got the most votes. \$\endgroup\$ Commented Jan 25, 2018 at 21:53
-11
\$\begingroup\$

Programmers can be lazy sometimes

Let people write the answers they want to (provided those answers run as specified in the challenge). People are free to downvote any solution, but not to delete them. This is because you might not see something in their solution that the author does see.

There are only two exceptions: 1) the author of these baseline submissions posts more than one solution a day (or has another account on PPCG and posts more than one a day across multiple accounts); or 2) the solution's length is greater than 3000 bytes (or some other convenient number of bytes). The concern is having too many posts of this kind, and the space reserved in memory to store them.

\$\endgroup\$
2
  • \$\begingroup\$ Thank you to totallyhuman... \$\endgroup\$
    – user58988
    Commented Jan 16, 2018 at 21:53
  • \$\begingroup\$ The argument that an apparently trivial answer may contain something we just can't appreciate runs into problems. If we never assume that we are capable of judging an answer to be out of place, then we end up with floods of ASCII art cats that we're afraid to delete just in case they are secretly relevant to the challenge. I accept that we can't draw an objective line, but I don't think that's a reason to just accept everything. \$\endgroup\$ Commented Jan 25, 2018 at 21:57
