
I'm doing my PhD in computer science. I have made a few attempts at top-ranked conferences such as IJCAI, CVPR, and ICML, but my papers were rejected, and consequently I sent them to second-tier conferences for the sake of having publications.

However, when I compare my work with papers accepted at those conferences (in previous years or the current year), I see a similar level of technical depth and novelty to mine.

For example, reviewers often mentioned that I had just added something to pre-existing approaches or combined already-existing things from the literature to shape an algorithm, which I noticed is the case for a notable number of papers at top computer science conferences.

One thing my papers lack is unnecessary but sophisticated mathematical jargon deployed to show that something important is happening. For example, I noticed in others' papers that although the contribution was, for instance, adding an extra objective term to the optimization framework, they used complicated algebraic representations or visualization techniques to relate their contribution to some bigger underlying phenomenon! But the short answer is that they did it because it obviously should make the result better.
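(To give a generic, hypothetical illustration of what I mean, the notation here is mine rather than from any specific paper: such a contribution often amounts to replacing a base objective L(θ) with

    L'(θ) = L(θ) + λ·R(θ),

where R(θ) is the newly added term and λ is a weight controlling its influence.)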

On another note, I'm not sure whether, if I try another top conference (with new work) and my paper gets rejected, it would damage my reputation or my supervisor's reputation in the field.

So although I still have great enthusiasm to try for the next relevant top conference, I have a feeling that I'm missing an important ingredient of the recipe!

PS: My advisor is not so competent in this regard; I'm among the first group of his PhD students, and you can guess the rest.

  • 7
    What does your advisor say? (You did ask your advisor, right?)
    – Mad Jack
    Commented Dec 28, 2017 at 21:18
  • 2
    Are there no reviews? They usually tell you what the problem is.
    – user64845
    Commented Dec 28, 2017 at 21:34
  • 1
    If your advisor doesn't care and/or is incompetent, find a new one.
    – Karl
    Commented Dec 28, 2017 at 23:14
  • 6
    Generally speaking, you are biased when you compare your own work to other competing work. So your work may look better to you than it actually is.
    – GEdgar
    Commented Dec 29, 2017 at 1:45
  • 2
How is relating to the bigger picture unnecessary?! It is one of the first things I look for when I review. It shows that the authors know what they are talking about, why the problem is interesting, how it relates to other problems, and how it can be useful for other problems, and it provides a background for the phenomenology of the problem. "Adding an extra term" is generally only useful as long as you can provide a reasonable explanation of why it works, even if it was initially discovered by pure chance, given that it helps me understand its limitations.
    – FBolst
    Commented Dec 29, 2017 at 2:27

4 Answers


One thing my papers lack is unnecessary but sophisticated mathematical jargon deployed to show that something important is happening. For example, I noticed in others' papers that although the contribution was, for instance, adding an extra objective term to the optimization framework, they used complicated algebraic representations or visualization techniques to relate their contribution to some bigger underlying phenomenon! But the short answer is that they did it because it obviously should make the result better.

I can think of several ways to interpret this observation, some more charitable than others.

  • Computer science research papers are not about the results per se, but about the techniques developed to attain those results. It's possible that in the reviewers' eyes, the apparent connection to the larger underlying phenomenon is the main contribution of the paper. The main contribution is not merely that the outcome is better, but at least an attempted explanation of why the outcome is better.

  • People are bad judges of the quality of their own research results. More to the point: Your opinion of your research is irrelevant; only your peers' opinions actually matter. If they find your papers less interesting than others, then by definition your papers are less interesting.

  • Emerson was wrong. The world will not beat a path to your door if you merely build a better mousetrap. You have to sell your results. For your papers to be accepted, they must follow the cultural expectations of your audience. If the IJCAI/ICML/CVPR audience expects a certain level of mathematical sophistication, then papers that do not display that sophistication are less likely to be accepted, even if that sophistication is unnecessary.

  • It is very easy to confuse mathematical depth/complexity with importance/difficulty. Top theoretical computer science conferences have a reputation for preferring more mathematically "difficult" papers to papers using more elementary techniques, and lots of reviewers assume without justification that any result that appears straightforward in retrospect must have been easy to derive. But unless someone proves that P=NP, "trivial" is different from "nondeterministically trivial".

  • Because CVPR, ICML, and IJCAI are enormous conferences with low acceptance rates, acceptance decisions have high variance. Given the breadth and complexity of the field, the ridiculous number of submissions, and the limited time for reviews, it is impossible for the program committee to make fully informed judgements about every submission. There is an element of randomness even at smaller conferences, but for larger prestigious conferences, the randomness overwhelms the actual "signal". In 2014, NIPS ran an experiment with two independent program committees; most papers accepted by one committee were rejected by the other. It amazes me that anyone found this result surprising.

  • PC members are apes; they do apey things. As in any other large community, there are sub-communities within machine learning that prefer their own papers to others. Even though submissions are blinded, reviewers can identify tribal affiliation—if only subconsciously—by writing style, citation patterns, choice of method, choice of data set, or choice of evaluation metrics. If you're not in the right tribe, your papers are less likely to be accepted.

  • What does PC member stand for?
    – padawan
    Commented Jan 13, 2018 at 23:14
  • 1
    PC = program committee, the group charged with deciding which of far too many submissions to accept for publication in the proceedings and presentation at the conference.
    – JeffE
    Commented Jan 14, 2018 at 3:58
  • I know you deliberately exaggerated the situation, but why call them apes?
    – padawan
    Commented Jan 14, 2018 at 16:29
  • 1
    @padawan Humans are apes.
    – JeffE
    Commented Jan 14, 2018 at 18:47

they used complicated algebraic representations or visualization techniques to relate their contribution to some bigger underlying phenomenon! But the short answer is that they did it because it obviously should make the result better.

This (relating your contribution to the bigger picture) is an important part of papers in our field, so I guess in yours, too. The reviewers and readers don't know the bigger picture of your specific research; you need to tell them. It sounds like you don't, whereas others, while doing research of similar quality, do.


One thing I've learned from experience is not to "overfit" to accepted papers that have just been published. It is very easy to look at a published paper, say "this isn't really very novel! I can do this too!", and then proceed to use it as a benchmark for minimum viable novelty. I have also noticed that papers aiming at that borderline tend to be rejected more often. Moreover, there is also a time element: technical novelty is relative to its time.

Then again, I don't think it's healthy to assume one's own work clears the novelty bar. You may think your work is actually novel, but it might not be. Reviews and acceptance/rejection outcomes tell the truth, to some extent. There is some variance in the results, and sometimes luck is not on your side. But I believe a reasonably good paper should get into a top conference within one or two further tries after it has been rejected once.

What kinds of review scores do you normally get? There are many degrees of rejection: there is a big difference between one strong accept plus two rejects and three weak rejects.


The answer to this question lies in the reviewers' responses.

In the end, the reviewers and the editor are the ones who decide whether a submission is good enough to appear in a conference, proceedings, or journal. If they write that you merely added something to pre-existing approaches, then it means that your submission lacks novelty.

Also, maybe the "unnecessary but sophisticated math jargon" is in fact a formal definition of the problem, whose absence would cause terrible ambiguities.

Your reputation being damaged is a very rare outcome. However, if you keep submitting papers that lack novelty and technical language, then you might be blacklisted. Your supervisor's reputation has nothing to do with this.

  • 1
As I tried to say, I have noticed many cases of accepted papers at the same or equivalent top conferences with the same level of technical novelty, especially ones adding something to pre-existing approaches in the literature!
    – Bob
    Commented Dec 28, 2017 at 21:40
  • 3
But unfortunately, you are not the judge of technicality and novelty; the reviewers are.
    – padawan
    Commented Dec 28, 2017 at 21:48
