
Because some researchers do not like to think about the real world, and reviewers do not want the hassle.

(What's next is a bit of a rant.)

I recently surveyed a specific class of geometry-related algorithms. In every paper the program was described as working perfectly, but once I requested the source code from about a dozen authors, things turned ugly.

50% of the software was missing important advertised features. For example, the software would only work in 2D while the paper showed 3D examples (which, in my case, makes things considerably harder). When I asked why these features were missing, the answer was usually that they had never been implemented, or that they had been implemented but proved unstable or broken. To be clear: it was simply impossible to reproduce the results shown in the paper with the supplied software, even though that software had often been improved after the paper was published.

75% of the software didn't work correctly even in ordinary cases. This was usually because the algorithm was designed with exact ('perfect') real numbers in mind but implemented with ordinary floating-point numbers, so representation errors crept in and snowballed into large errors in the output. Only a few papers mentioned these problems, and only two tried to (partially) address them.
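This floating-point failure mode is easy to reproduce. As a minimal illustration (my own sketch, not code from any of the surveyed papers), take the standard 2D orientation predicate: in exact real arithmetic the point (0.3, 0.1 + 0.2) lies on the line y = x, but with doubles the predicate returns a tiny nonzero value, so an `== 0` branch for the degenerate "on the line" case silently never fires:

```python
def orient2d(ax, ay, bx, by, cx, cy):
    # Sign of the cross product (b - a) x (c - a):
    # > 0 means c is left of the directed line a->b, < 0 right, 0 on the line.
    return (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)

a, b = (0.0, 0.0), (1.0, 1.0)   # the line y = x
c = (0.3, 0.1 + 0.2)            # mathematically on the line...

d = orient2d(*a, *b, *c)
print(d)                        # ...but 0.1 + 0.2 != 0.3 in binary doubles,
print(d == 0.0)                 # so d is a tiny positive number and this is False
```

Robust geometry libraries avoid this by evaluating such predicates with exact or adaptive-precision arithmetic instead of raw doubles.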

85% of the software failed in scenarios specifically designed to probe for problem cases. Let's be honest: if a 'mere' student can construct a scenario in a few weeks that totally breaks your algorithm, you probably already knew about it.

Not supplying code makes it possible to lie, and to my disgust (I'm new to the academic world) this is done extremely often. My supervisor wasn't even surprised. However, testing code is a lot of work, so this behavior will probably go unchecked for a while longer.

Roy T.
