363 votes

This is definitely subjective, but I'd like to try to avoid it becoming argumentative. I think it could be an interesting question if people treat it appropriately.

The idea for this question came from the comment thread from my answer to the "What are five things you hate about your favorite language?" question. I contended that classes in C# should be sealed by default - I won't put my reasoning in the question, but I might write a fuller explanation as an answer to this question. I was surprised at the heat of the discussion in the comments (25 comments currently).

So, what contentious opinions do you hold? I'd rather avoid the kind of thing which ends up being pretty religious with relatively little basis (e.g. brace placing) but examples might include things like "unit testing isn't actually terribly helpful" or "public fields are okay really". The important thing (to me, anyway) is that you've got reasons behind your opinions.

Please present your opinion and reasoning - I would encourage people to vote for opinions which are well-argued and interesting, whether or not you happen to agree with them.


407 Answers

146 votes

Most professional programmers suck

I have come across too many people doing this job for their living who were plain crappy at what they were doing. Crappy code, bad communication skills, no interest in new technology whatsoever. Too many, too many...

115 votes

A degree in computer science does not (and is not supposed to) teach you to be a programmer.

Programming is a trade; computer science is a field of study. You can be a great programmer and a poor computer scientist, or a great computer scientist and an awful programmer. It is important to understand the difference.

If you want to be a programmer, learn Java. If you want to be a computer scientist, learn at least three almost completely different languages, e.g. assembler, C, Lisp, Ruby, Smalltalk.

  • 2
    The first one is not really controversial, at least not in the CS field.
    – wds
    Commented Jan 3, 2009 at 19:06
  • 6
    Java doesn't really teach you how to be a real programmer, since there's so much you can't learn with it. It's like building a car with legos. Commented Jan 6, 2009 at 1:58
  • 7
    I may agree with the first point, but saying that knowing only Java could make a programmer ..... that's a crime, punishable with death!!!
    – hasen
    Commented Jan 7, 2009 at 2:12
  • 1
    @MusiGenesis: I've actually just completed my degree in Engineering (Software). I'm certainly not a computer scientist, and I don't want to be.
    – ajlane
    Commented Mar 9, 2009 at 12:43
  • 3
    I disagree that CS does not teach you to be a programmer. It DOES and SHOULD do that - incidentally by teaching multiple languages, not one only - but that's not ALL it should do. CS degrees should also teach you about as many different areas of CS as possible, eg basic programming, functional languages, databases, cryptography, AI, language engineering (ie compilers/parsing), architecture and math-leaning areas like computer graphics and various algorithms. Commented May 10, 2009 at 0:04
101 votes

SESE (Single Entry Single Exit) is not law

Example:

public int foo() {
   if( someCondition ) {
      return 0;
   }

   return -1;
}

vs:

public int foo() {
   int returnValue = -1;

   if( someCondition ) {
      returnValue = 0;
   }

   return returnValue;
}

My team and I have found that abiding by this all the time is actually counter-productive in many cases.
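
To make the tradeoff concrete, here is a minimal sketch of my own (not from the original answer) of the search-loop case raised in the comments below: returning at the point of discovery reads more directly than threading a result variable and a "found" flag to the bottom of the method.

public int indexOf(int[] values, int target) {
    for (int i = 0; i < values.length; i++) {
        if (values[i] == target) {
            return i; // exit as soon as the answer is known
        }
    }
    return -1; // single fall-through default for "not found"
}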

  • 4
    Moreover, an exception is just another exit point. When functions are short and error-safe (-> finally, RAII), there is no need to follow SESE. Commented Jan 7, 2009 at 14:01
  • 5
    Agreed. I cringe at the 100+ loc methods I've seen that carry a return value from the first line all the way to the bottom just to adhere to SESE. There is something to be said for exiting when you find the answer. Commented Jan 9, 2009 at 19:14
  • 2
    Wait people actually do this? Why can't you just search for "return"?
    – nosatalian
    Commented May 31, 2009 at 1:54
  • 9
    I think SESE is a great example of a solution in search of a problem Commented Oct 22, 2009 at 0:24
  • 3
    SESE dates back to 1960s and structured programming. it made a lot of sense then. single entry is pretty much guaranteed today, clinging to single exit just betrays low iq. Commented Dec 15, 2009 at 3:35
100 votes

C++ is one of the WORST programming languages - EVER.

It has all of the hallmarks of something designed by committee - it does not do any given job well, and does some jobs (like OO) terribly. It has a "kitchen sink" desperation to it that just won't go away.

It is a horrible "first language" to learn to program with. You get no elegance, no assistance (from the language). Instead you have bear traps and mine fields (memory management, templates, etc.).

It is not a good language to try to learn OO concepts. It behaves as "C with a class wrapper" instead of a proper OO language.

I could go on, but will leave it at that for now. I have never liked programming in C++, and although I "cut my teeth" on FORTRAN, I totally loved programming in C. I still think C was one of the great "classic" languages. Something that C++ is certainly NOT, in my opinion.

Cheers,

-R

EDIT: To respond to the comments on teaching C++. You can teach C++ in two ways - either teaching it as C "on steroids" (start with variables, conditions, loops, etc), or teaching it as a pure "OO" language (start with classes, methods, etc). You can find teaching texts that use one or other of these approaches. I prefer the latter approach (OO first) as it does emphasize the capabilities of C++ as an OO language (which was the original design emphasis of C++). If you want to teach C++ "as C", then I think you should teach C, not C++.

But the problem with C++ as a first language in my experience is that the language is simply too BIG to teach in one semester, plus most "intro" texts try and cover everything. It is simply not possible to cover all the topics in a "first language" course. You have to at least split it into 2 semesters, and then it's no longer "first language", IMO.

I do teach C++, but only as a "new language" - that is, you must be proficient in some prior "pure" language (not scripting or macros) before you can enroll in the course. C++ is a very fine "second language" to learn, IMO.

-R

'Nother Edit: (to Konrad)

I do not at all agree that C++ "is superior in every way" to C. I spent years coding C programs for microcontrollers and other embedded applications. The C compilers for these devices are highly optimized, often producing code as good as hand-coded assembler. When you move to C++, you gain a tremendous overhead imposed by the compiler in order to manage language features you may not use. In embedded applications, you gain little by adding classes and such, IMO. What you need is tight, clean code. You can write it in C++, but then you're really just writing C, and the C compilers are more optimized in these applications.

I wrote a MIDI engine, first in C, later in C++ (at the vendor's request) for an embedded controller (sound card). In the end, to meet the performance requirements (MIDI timings, etc) we had to revert to pure C for all of the core code. We were able to use C++ for the high-level code, and having classes was very sweet - but we needed C to get the performance at the lower level. The C code was an order of magnitude faster than the C++ code, but hand coded assembler was only slightly faster than the compiled C code. This was back in the early 1990s, just to place the events properly.

-R

  • 1
    And I think that C++ is superior to C in every way, except that it unfortunately was designed to be “backwards” compatible to C. Commented Jan 3, 2009 at 11:57
  • 8
    I think C++ is a good example of "design by committee" done RIGHT. It's a mess in many ways, and for many purposes, it's a lousy language. But if you bother to really learn it, there's a remarkably expressive and elegant language hidden within. It's just a shame that few people discover it. Commented Jan 4, 2009 at 1:01
  • 5
    Okay, if C++ code was ten times slower than C code, what sort of Mickey Mouse compilers were you using? Or what idiotic code conventions were you required to use? Were you asked to do exception specifications, for example (almost always a bad idea)? Commented Jan 9, 2009 at 14:43
  • 3
    you don't have to use those features. if you only use the C subset, then C++ is equally fast as C. then, you can selectively pick those C++ features you like. some vector sugar here, some other stuff there. isn't that nice? Commented Jan 21, 2009 at 5:05
  • 4
    -1. C++ is still the most powerful multi-paradigm widely available language there is. It's the most adaptable of them all, therefore it can solve many different problems, which in some applications is very useful. It might not be best at each specific thing, but overall, it's seldom a really bad choice.
    – Macke
    Commented Aug 21, 2009 at 18:14
94 votes

You must know how to type to be a programmer.

It's controversial among people who don't know how to type, but who insist that they can two-finger hunt-and-peck as fast as any typist, or that they don't really need to spend that much time typing, or that Intellisense relieves the need to type...

I've never met anyone who does know how to type, but insists that it doesn't make a difference.

See also: Programming's Dirtiest Little Secret

  • 4
    Nemanja->"no difference whatsoever"?! I just got 70wpm on an online test. I could see how someone could scrape by at 20-30wpm, but if they are using two fingers, plugging away at 5wpm (yes, I've worked with people like that), it's holding them back.
    – KeyserSoze
    Commented Jan 2, 2009 at 22:03
  • 7
    No difference whatsoever. I don't even know what my current wpm level is, because I completely lost interest in it. Surely, it is useful to type quickly when you are writing documentation or answering e-mails, but for coding? Nah. Thinking takes time, typing is insignificant. Commented Jan 2, 2009 at 22:12
  • 2
    Well, if your typing is so bad that you are thinking about typing, that's time you could have spent thinking about the problem you are working on. And if your typing speed is a bottleneck in recording ideas, you may have to throttle your thinking until your output buffer is flushed.
    – KeyserSoze
    Commented Jan 3, 2009 at 1:01
  • 2
    @Nemanja Trifunovic - I hear what you are saying but, respectfully, I think you are dead wrong. Being able to type makes a huge difference.
    – jwpfox
    Commented Jan 3, 2009 at 13:43
  • 2
    +1. I repeatedly see people make tons of mistakes because they are watching their keyboard instead of watching the code on their screen. Most common are syntax and code-formatting issues, but also real bugs that aren't caught by the compiler.
    – flodin
    Commented Feb 28, 2009 at 10:52
89 votes

A degree in Computer Science or other IT area DOES make you a more well rounded programmer

I don't care how many years of experience you have, how many blogs you've read, how many open source projects you're involved in. A qualification (I'd recommend longer than 3 years) exposes you to a different way of thinking and gives you a great foundation.

Just because you've written some better code than a guy with a BSc in Computer Science does not mean you are better than him. What you have he can pick up in an instant, which is not the case the other way around.

Having a qualification shows your commitment - the fact that you would go above and beyond mere experience to make yourself a better developer. Developers who are good at what they do AND have a qualification can be very intimidating.

I would not be surprised if this answer gets voted down.

Also, once you have a qualification, you slowly stop comparing yourself to those with qualifications (my experience). You realize that it all doesn't matter at the end, as long as you can work well together.

Always act mercifully towards other developers, irrespective of qualifications.

  • "degree in Computer Science or other IT area DOES make you more well rounded" ... "realize that it all doesn't matter at the end, as long as you can work well together" <- sounds a tiny bit inconsistent and self-contradictory.
    – dreftymac
    Commented Jan 4, 2009 at 4:48
  • 4
    Agree - qualifications are indicators of commitment. They can be more, but even if that's all they are, they have value. It is only those without pieces of paper who decry them. Those with them know the limits of their value but know their value too.
    – jwpfox
    Commented Jan 4, 2009 at 11:35
  • 1
    A degree in ANY area (except maybe post-modern literary criticism) makes you a more well-rounded programmer, especially if it's in mathematics or science or engineering. Comp Sci and IT degrees tend to have incredibly narrow scope and focus. Commented Jan 13, 2009 at 17:18
  • 3
    In the spirit of healthy discussion I'll just say that I vehemently disagree (and I've got one). Past deliverables shows commitment, not that you lived somewhere for 4 years and read some books. Commented Jan 23, 2009 at 22:20
  • 6
    I don't believe in degrees as measurements of value or skill, but studying at a university gives you the opportunity to learn the foundations of many different fields that can be useful to you in a work situation. I'm doubtful if being able to graduate is an acceptable proof that you've learned anything, but I know that you CAN learn a lot of useful skills, if you're ambitious enough. Commented May 5, 2009 at 21:11
89 votes

Lazy Programmers are the Best Programmers

A lazy programmer most often finds ways to decrease the amount of time spent writing code (especially a lot of similar or repeating code). This often translates into tools and workflows that other developers in the company/team can benefit from.

As the developer encounters similar projects he may create tools to bootstrap the development process (e.g. creating an ORM layer that works with the company's database design paradigms).

Furthermore, developers such as these often use some form of code generation. This means all bugs of the same type (for example, the code generator did not check for null parameters on all methods) can often be fixed by fixing the generator and not the 50+ instances of that bug.

A lazy programmer may take a few more hours to get the first product out the door, but will save you months down the line.
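
As a hedged sketch of the fix-the-generator point (my illustration, not the answerer's; the field list and template are hypothetical - a real generator would read a schema or model instead):

public class SetterGenerator {
    public static void main(String[] args) {
        String[] fields = {"id", "name", "email"}; // hypothetical model
        for (String field : fields) {
            String cap = Character.toUpperCase(field.charAt(0)) + field.substring(1);
            // Adding the null check here fixes every generated setter at
            // once - no hunting down 50+ hand-written copies of the bug.
            System.out.printf(
                "public void set%s(String %s) {%n"
                + "    if (%s == null) throw new IllegalArgumentException(\"%s is null\");%n"
                + "    this.%s = %s;%n"
                + "}%n%n",
                cap, field, field, field, field, field);
        }
    }
}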

  • 16
    You are mistaken "lazy" for "clever". A clever programmer will actually have to work less, which may make him/her look "lazy". Commented Jan 26, 2009 at 10:16
  • @Diego, tnx, changed it to make it more appropriate. Commented Jan 26, 2009 at 14:46
  • 3
    I agree with what you're trying to say, but I disagree with your definition of lazy. A lazy programmer does not look ahead; they will copy-paste a block of code between 4 different functions if it's the easiest thing to do at the time. Commented May 10, 2009 at 0:38
  • 7
    lazy/clever programmer... Programmers have to be clever to be reasonable programmers, so that's a given. A lazy programmer picks the shortest/easiest path to the solution of a problem. And this is not about copy/pasting the same code snippet 400 times, but rather finding a way to avoid copying the same code 400 times. That way the code can be easily changed in one place! The lazy programmer likes to only change the code in one place ;) The lazy programmer also knows that the code is likely to be changed several times. And the lazy programmer just hates finding the 400 snippets twice.
    – Zuu
    Commented Jun 15, 2009 at 11:33
  • 1
    Though I agree with your explanation, "lazy" isn't really the best word to describe this. Lazy - resistant to work or exertion. I know a lazy programmer who is too lazy to create a bat file to automate a simple task that I see him type out all the time. If he would just spend a little time to make a few bat files it would increase his productivity. It turns out he is a good developer, however he could be even better.
    – gradbot
    Commented Oct 13, 2009 at 17:23
87 votes

Don't use inheritance unless you can explain why you need it.
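
For illustration (my sketch, not the answerer's), the composition alternative the comments below recommend: reuse a collection's behavior without inheriting its whole contract.

import java.util.ArrayDeque;
import java.util.Deque;

public class Stack<T> {
    // Composition: the deque is a private, swappable detail rather than
    // a parent class whose full API leaks into Stack's contract.
    private final Deque<T> items = new ArrayDeque<>();

    public void push(T item) { items.push(item); }
    public T pop()           { return items.pop(); }
    public boolean isEmpty() { return items.isEmpty(); }
}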

  • Inheritance is the second strongest relationship in C++ and the strongest relationship in most other languages. It strongly couples your code with that of your ascendant. If you can just use it through interfaces go for it. Prefer composition over inheritance always. Commented Jan 5, 2009 at 16:35
  • Most people use inheritance as a form of reuse, overriding whatever is needed to change. They generally don't know/care if they violate LSP, and could achieve what they need with composition. Commented Jan 9, 2009 at 15:47
  • 2
    I tend to think that delegation is cleaner in most cases where people use inheritance (esp. lib development) because: - abstraction is better - coupling is looser - maintenance is easier Delegation defines a contract between the delegating and the delegate that is easier to enforce among versions.
    – fbonnet
    Commented Jan 15, 2009 at 8:50
    He's not saying don't use inheritance at all, just don't use it if you can't explain why you need it. If you're wanting to code an OO application and think throwing a little inheritance in here and there is just gonna make it OO, then you're dumb and should be fired from the ability to program.
    – Wes P
    Commented Jan 29, 2009 at 20:37
  • 8
    You should expand that to: "Don't ever code anything that you can't explain." Everything you do in code should have a reason.
    – Oorang
    Commented Dec 11, 2009 at 2:10
85 votes

The world needs more GOTOs

GOTOs are avoided religiously, often with no reasoning beyond "my professor told me GOTOs are bad." They have a purpose and would greatly simplify production code in many places.

That said, they aren't really necessary in 99% of the code you'll ever write.
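
One hedged aside of my own: Java reserves goto as a keyword but never implemented it, so the closest sanctioned equivalent for the commonest legitimate use - escaping nested loops without flag variables - is a labeled break. A minimal sketch:

static int[] findFirst(int[][] grid, int target) {
    int[] found = null;
    search:                                   // the label plays the role of a goto target
    for (int i = 0; i < grid.length; i++) {
        for (int j = 0; j < grid[i].length; j++) {
            if (grid[i][j] == target) {
                found = new int[] { i, j };
                break search;                 // one jump, no flags checked in either loop
            }
        }
    }
    return found;
}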

  • 4
    I agree. Not necessarily that we need more gotos, but that sometimes programmers go to ridiculous lengths to avoid them: such as creating bizarre constructs like: do { ... break; ... } while (false); to simulate a goto while pretending not to use one.
    – Ferruccio
    Commented Jan 2, 2009 at 13:20
  • 4
    I have seen only 1 example of a good usage in the last 5 years, so make it 99.999 percent.
    – Paco
    Commented Jan 2, 2009 at 13:51
  • 10
    I've never had to use a goto for anything. Anytime when I actually thought goto might be a good idea, it was instead an indicator that things weren't flowing properly. Commented Jan 2, 2009 at 15:06
  • 27
    +1 for controversy :). Oh, I know what GOTO's are, I started with BASIC like many of you. We need more GOTO's like we need DOS 8.3 filenames, plain ASCII encoding, FAT 16 filesystems, and 5 1/4 inch floppies. Commented Jan 7, 2009 at 8:26
  • 10
    This thread considered harmful. Edsger Dijkstra is rolling in his grave. :) Commented Mar 23, 2009 at 14:07
80 votes

I've been burned for broadcasting these opinions in public before, but here goes:

Well-written code in dynamically typed languages follows static-typing conventions

Having used Python, PHP, Perl, and a few other dynamically typed languages, I find that well-written code in these languages follows static typing conventions, for example:

  • It's considered bad style to re-use a variable with different types (for example, it's bad style to take a list variable and assign it an int, then assign the variable a bool in the same method). Well-written code in dynamically typed languages doesn't mix types.

  • A type-error in a statically typed language is still a type-error in a dynamically typed language.

  • Functions are generally designed to operate on a single datatype at a time, so that a function which accepts a parameter of type T can only sensibly be used with objects of type T or subclasses of T.

  • Functions designed to operate on many different datatypes are written in a way that constrains parameters to a well-defined interface. In general terms, if two objects of types A and B perform a similar function, but aren't subclasses of one another, then they almost certainly implement the same interface.

While dynamically typed languages certainly provide more than one way to crack a nut, most well-written, idiomatic code in these languages pays close attention to types just as rigorously as code written in statically typed languages.

Dynamic typing does not reduce the amount of code programmers need to write

When I point out how peculiar it is that so many static-typing conventions cross over into dynamic typing world, I usually add "so why use dynamically typed languages to begin with?". The immediate response is something along the lines of being able to write more terse, expressive code, because dynamic typing allows programmers to omit type annotations and explicitly defined interfaces. However, I think the most popular statically typed languages, such as C#, Java, and Delphi, are bulky by design, not as a result of their type systems.

I like to use languages with a real type system like OCaml, which is not only statically typed, but its type inference and structural typing allow programmers to omit most type annotations and interface definitions.

The existence of the ML family of languages demonstrates that we can enjoy the benefits of static typing with all the brevity of writing in a dynamically typed language. I actually use OCaml's REPL for ad hoc, throwaway scripts in exactly the same way everyone else uses Perl or Python as a scripting language.

  • 7
    100% right. If only the Python developers would finally acknowledge this and change their otherwise exceptional language accordingly. Thanks for posting this. Commented Jan 9, 2009 at 19:50
  • But there is already one statically-typed Python-like language. It's called C# ;-)
    – zuber
    Commented Feb 4, 2009 at 23:08
  • C# is python-like? Maybe you meant Boo ;)
    – Juliet
    Commented Feb 5, 2009 at 3:14
  • 3
    If anyone says dynamic typing is more terse, just point them to Haskell =). I agree with all but your 3rd bullet point. Dynamic code often accepts parameters that can be one of two types. For example, Prototype functions accept either HTMLElements, or strings which you can use $() to look up to get HTMLElements. A good static typing system will allow you to do this =).
    – Claudiu
    Commented May 6, 2009 at 7:16
  • 3
    #2 is only true if you follow #1, which in my opinion is unnecessary. If it's clear what the code does, then it is correct. I have code I use a lot that reads in data from a tab delimited file, and parses that into an array of floats. Why do I need a different variable for each step of the process? The data (as the variable is called) is still the data in each step. Commented May 8, 2009 at 1:38
76 votes

Programmers who spend all day answering questions on Stackoverflow are probably not doing the work they are being paid to do.

  • Is this controversial? I guess no! -1! Commented Sep 9, 2009 at 7:37
  • the latter part is highly controversial
    – Egg
    Commented Sep 21, 2009 at 14:32
  • 6
    I use the excuse: " I am charging my time to Professional Development" on the grounds that I am learning something useful as a developer. Boss agrees.
    – amischiefr
    Commented Oct 26, 2009 at 15:40
  • I'm not getting paid to do anything now. Just like hasen j.
    – Behrooz
    Commented Dec 14, 2009 at 18:59
  • 1
    My friend likes to use the excuse: "I'm Compiling"
    – Dave
    Commented Aug 18, 2010 at 8:45
72 votes

Code layout does matter

Maybe specifics of brace position should remain purely religious arguments - but that doesn't mean all layout styles are equal, or that there are no objective factors at all!

The trouble is that the uber-rule for layout, namely "be consistent", sound as it is, is used as a crutch by many who never try to see whether their default style can be improved on - and who assume, furthermore, that it doesn't even matter.

A few years ago I was studying Speed Reading techniques, and some of the things I learned about how the eye takes in information in "fixations", can most optimally scan pages, and the role of subconsciously picking up context, got me thinking about how this applied to code - and writing code with it in mind especially.

It led me to a style that tended to be columnar in nature, with identifiers logically grouped and aligned where possible (in particular I became strict about having each method argument on its own line). However, rather than long columns of unchanging structure it's actually beneficial to vary the structure in blocks so that you end up with rectangular islands that the eye can take in in a single fixation - even if you don't consciously read every character.

The net result is that, once you get used to it (which typically takes 1-3 days) it becomes pleasing to the eye, easier and faster to comprehend, and is less taxing on the eyes and brain because it's laid out in a way that makes it easier to take in.

Almost without exception, everyone I have asked to try this style (including myself) initially said, "ugh I hate it!", but after a day or two said, "I love it - I'm finding it hard not to go back and rewrite all my old stuff this way!".

I've been hoping to find the time to do more controlled experiments to collect together enough evidence to write a paper on, but as ever have been too busy with other things. However this seemed like a good opportunity to mention it to people interested in controversial techniques :-)
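
The answer includes no code, so the following is only my guess at the flavor of the style described - one argument per line, with types and names falling into columns the eye can take in at a glance:

public static String formatAddress(String name,
                                   String street,
                                   String city,
                                   int    postcode) {
    return name + "\n" + street + "\n" + city + " " + postcode;
}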

[Edit]

I finally got around to blogging about this (after many years parked in the "meaning to" phase): Part one, Part two, Part three.

  • Generally when things are aligned in a columnar way it creates a maintenance burden for a developer. Ie aligning the data type and identifier in a method declaration... Line1(int id,) line 2(char id,) ... making sure the data type, variable name, and even commas all are in a column is a MESS
    – Cervo
    Commented Jan 2, 2009 at 18:55
  • it usually just takes a couple of extra keypresses, if that. I didn't go into too many specifics, but I usually only break it into two columns for alignment purposes (usually type - id). I have some other rules to ease the burden where parentheses are concerned. The biggest obstacle I have [cont...] Commented Jan 2, 2009 at 22:34
  • [...cont] is fighting against auto-formatting editors. In fact, unless it's easy to disable I usually give up in those circumstances and "go with the flow". But with especially verbose languages like C++ I still prefer it. Commented Jan 2, 2009 at 22:36
  • Interesting. I would like to see some examples. Do you have a blog?
    – Jay Bazuzi
    Commented Jan 2, 2009 at 22:37
  • Well, I have: levelofindirection.com (yes, it forwards to blogspot - the pun was intended), and also organic-programming.blogspot.com . However, you'll notice neither have been updated for quite a while - due in large part to vconqr.com ;-) [cont...] Commented Jan 3, 2009 at 16:59
71 votes

Opinion: explicit variable declaration is a great thing.

I'll never understand the "wisdom" of letting the developer waste costly time tracking down runtime errors caused by variable name typos instead of simply letting the compiler/interpreter catch them.

Nobody's ever given me an explanation better than "well it saves time since I don't have to write 'int i;'." Uhhhhh... yeah, sure, but how much time does it take to track down a runtime error?

  • 2
    I actually wanted to write this same opinion, as well. IMHO, this is the major drawback of both Python and Ruby, for no good reason at all. Perl at least offers use strict. Commented Jan 2, 2009 at 15:36
  • 2
    Explicit declaration is good, to avoid typos. Assigning types to variables is frequently premature optimization. Commented Jan 2, 2009 at 16:08
  • 5
    Yup. ONE bug hunt involving an l (between k and m) becoming a 1 (between 0 and 2) wasted a lifetime of declaring variables. Commented Jan 3, 2009 at 5:13
  • 1
    Anything else is not a real language. Now THAT'S controversial. Commented Jan 10, 2009 at 23:31
  • 1
    I remember learning Visual Basic 6 in high school. If OPTION EXPLICIT was not the first line in each source file, we would fail.
    – rlbond
    Commented Mar 21, 2009 at 5:00
68 votes

Opinion: Never ever have different code between "debug" and "release" builds

The main reason being that release code almost never gets tested. Better to have the same code running in test as it is in the wild.
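
A minimal sketch of one way to honor this (my illustration; the property name is hypothetical): ship a single code path and vary behavior through external configuration rather than conditional compilation.

public class Diagnostics {
    // Same binary everywhere; only this external switch differs,
    // e.g. -Dapp.verbose=true on test machines.
    private static final boolean VERBOSE = Boolean.getBoolean("app.verbose");

    static void trace(String message) {
        if (VERBOSE) {                 // a runtime check, not an #ifdef
            System.err.println("[trace] " + message);
        }
    }
}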

  • I released something the week before last that I'd only tested in debug mode. Unfortunately, while it worked just fine in debug, with no complaints, it failed in release mode. Commented Jan 2, 2009 at 22:30
  • The only thing I differ between Debug/Release builds is the default logging level. Anything else always comes back to bite you.
    – devstuff
    Commented Jan 2, 2009 at 22:45
  • ummm - what about asserts? Do you either not use them, or do you leave them in the release build? Commented Jan 3, 2009 at 0:53
  • 1
    @MarkJ: That's what I'm saying, you should be testing the code that goes out the door, and not have a difference between "Release" that is not tested, and "Debug" that is tested, but never released. Commented Jan 27, 2009 at 13:50
  • 4
    You just need to switch. Our QA uses debugging builds during development but switches to release towards the end. There are certain levels of sanity checking that you would like to be performed as much as possible before shipping, but cannot afford to ship due to performance reasons.
    – nosatalian
    Commented May 31, 2009 at 1:52
64 votes

Opinion: developers should be testing their own code

I've seen too much crap handed off to test only to have it not actually fix the bug in question, incurring communication overhead and fostering irresponsible practices.

  • +1. This a matter of ownership, we tend to care better for things we own than the things we don't. Want proof? Take a look at your company vehicles. Commented Jan 3, 2009 at 13:55
  • It also comes with the onus that people reporting bugs can report in sufficient detail so that it can be reproduced and tested to be proven fixed. It sucks to be so maligned when you reproduce a defect according to description, fix it, and find that the tester still has issues you didn't. Commented Jan 7, 2009 at 7:11
  • 1
    I think testing and developing are different skills; they should be done by those who are good at them. Isolating testers from developers and making it hard for testers to get their bugs fixed: no excuse. Commented Feb 27, 2009 at 19:34
  • 1
    Sounds like bad developers to me. I'd file this under not all lazy developers are good developers.
    – gradbot
    Commented Oct 13, 2009 at 17:25
  • 2
    +1 for controversy: I'm only going to test the things I think to test for, and if I design the particular method... I've already thought of everything that can go wrong (from my point of view). A good tester will see another point of view -> like your users. Commented Oct 14, 2009 at 19:21
62 votes

Respect the Single Responsibility Principle

At first glance you might not think this would be controversial, but in my experience, when I mention to another developer that they shouldn't be doing everything in the page load method, they often push back... so, for the children, please quit building the "do everything" method we see all too often.
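
A sketch of the refactoring being urged (types and names hypothetical, my own illustration): the page-load handler shrinks to a coordinator, and each concern gets a method with a single reason to change.

import java.util.List;

abstract class OrdersPage {
    protected void onPageLoad() {
        User user = authenticate();             // session concerns only
        List<Order> orders = loadOrders(user);  // data access only
        render(orders);                         // presentation only
    }

    abstract User authenticate();
    abstract List<Order> loadOrders(User user);
    abstract void render(List<Order> orders);

    static class User {}
    static class Order {}
}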

  • 1
    Agree, but not very controversial?
    – Ed Guiness
    Commented Jan 2, 2009 at 14:06
  • it's controversial because the ugly mess that most people call MVC is mostly a 'do everything'
    – Javier
    Commented Jan 2, 2009 at 14:14
  • Really? I actually thought that MVC was the opposite to that. Commented Jan 2, 2009 at 14:41
  • This answer seems to stir up a bit of controversy on its controversial-ness. ;P
    – strager
    Commented Jan 2, 2009 at 21:51
  • 1
    I Agree RE: MVC - really hard to limit method bloat on the controllers Commented Jan 5, 2009 at 7:39
62 votes

Pagination is never what the user wants

If you start having the discussion about where to do pagination - in the database, in the business logic, on the client, etc. - then you are asking the wrong question. If your app is giving back more data than the user needs, figure out a way for the user to narrow down what they need based on real criteria, not arbitrarily sized chunks. And if the user really does want all those results, then give them all the results. Who are you helping by giving back 20 at a time? The server? Is that more important than your user?

[EDIT: clarification, based on comments]

As a real world example, let's look at this Stack Overflow question. Let's say I have a controversial programming opinion. Before I post, I'd like to see if there is already an answer that addresses the same opinion, so I can upvote it. The only option I have is to click through every page of answers.

I would prefer one of these options:

  1. Allow me to search through the answers (a way for me to narrow down what I need based on real criteria).

  2. Allow me to see all the answers so I can use my browser's "find" option (give me all the results).

The same applies if I just want to find an answer I previously read, but can't find anymore. I don't know when it was posted or how many votes it has, so the sorting options don't help. And even if I did, I still have to play a guessing game to find the right page of results. The fact that the answers are paginated and I can directly click into one of a dozen pages is no help at all.

--
bmb

  • 13
    Google does pagination, Google is very popular.
    – tuinstoel
    Commented Jan 3, 2009 at 22:31
  • 2
    maybe you should give a concrete example of a thing that's paginated but shouldn't be. for example, how would you "narrow down" answers to this question?
    – hasen
    Commented Jan 4, 2009 at 19:58
  • 5
    @tuinstoel google does a lot of things but is not cooking fish. That google is doing pagination has no consequence in its popularity. Pagination is an antiquated model from books time. It will disappear soon in favor of ajax like refreshes, used by Google Reader for example. Commented Jun 23, 2009 at 9:01
  • 1
    I really, really hate the default 10 results from Google. I turn it up to 100 on every browser I use. I'd probably turn it to 1000 if there were an option (and it still was speedy)
    – nos
    Commented Jul 14, 2009 at 19:55
  • 1
    I came across this answer while paging through and searching every answer to this question to see if anyone had already posted about anonymous functions. Just sayin' Commented Oct 14, 2009 at 18:15
60 votes

Architects that do not code are useless.

That sounds a little harsh, but it's not unreasonable. If you are the "architect" for a system, but do not have some amount of hands-on involvement with the technologies employed then how do you get the respect of the development team? How do you influence direction?

Architects need to do a lot more (meet with stakeholders, negotiate with other teams, evaluate vendors, write documentation, give presentations, etc.). But if you never see code checked in by your architect... be wary!

  • 1
    Architects that do code are worse than those that don't. i.e. their productivity is negative.
    – finnw
    Commented Jan 17, 2009 at 16:46
60 votes

Source Control: Anything But SourceSafe

Also: Exclusive locking is evil.

I once worked somewhere where they argued that exclusive locks meant you were guaranteeing that people were not overwriting someone else's changes when you checked in. The problem was that, in order to get any work done, if a file was locked devs would just change their local file to writable, then merge (or overwrite) the source-control version with theirs when they had the chance.

  • 6
    Not controversial. Nobody used SourceSafe by choice. Commented Jan 13, 2009 at 17:13
  • 3
    @MusiGenesis: Yes they do. They exist. Commented Jan 14, 2009 at 9:33
  • 3
    My company is still using SourceSafe. The main reasons are a) General inertia and b) The devs are scared of the idea of working without exclusive locks.
    – T.E.D.
    Commented Jan 14, 2009 at 18:11
  • 2
    My personal feeling is that the ability to merge code files should be a skill all programmers need, like all programmers need to know how to compile their code. It's part of what we do as a byproduct of using source control. Commented Jan 15, 2009 at 0:52
  • 1
    Just to be pedantic...while exclusive locks were the default until recently, SourceSafe has actually supported edit-merge-commit mode since 1998. Commented Jun 12, 2009 at 6:11
60 votes

Objects Should Never Be In An Invalid State

Unfortunately, so many of the ORM frameworks mandate zero-arg constructors for all entity classes, using setters to populate the member variables. In those cases, it's very difficult to know which setters must be called in order to construct a valid object.

MyClass c = new MyClass(); // Object in invalid state. Doesn't have an ID.
c.setId(12345); // Now object is valid.

In my opinion, it should be impossible for an object to ever find itself in an invalid state, and the class's API should actively enforce its class invariants after every method call.

Constructors and mutator methods should atomically transition an object from one valid state to another. This is much better:

MyClass c = new MyClass(12345); // Object starts out valid. Stays valid.

As the consumer of some library, it's a huuuuuuge pain to keep track of whether all the right setters have been invoked before attempting to use an object, since the documentation usually provides no clues about the class's contract.
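
A minimal sketch of the principle (the class is hypothetical): the only way to obtain an instance is a constructor that establishes the invariant, and the field is final, so no later call can break it.

public final class Customer {
    private final int id; // can never revert to an invalid value

    public Customer(int id) {
        if (id <= 0) {
            throw new IllegalArgumentException("id must be positive: " + id);
        }
        this.id = id; // the object is valid from birth
    }

    public int getId() { return id; }
}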

  • 3
    TOTALLY agree! And I get very frustrated when I see concepts like this become so popular. +1 Commented Jan 22, 2009 at 14:33
  • Invalid States lead to exceptions in my experience. Commented Jan 22, 2009 at 22:25
  • @Cameron, are you saying that you should be able to initialize with a default constructor, then set each property, with each setter checking for an invalid state and throwing an exception? If so, how can you possibly handle a situation where 2 properties need to be in synch to be valid? Commented Jan 23, 2009 at 15:24
  • 1
    That's why I hate ORM frameworks, despite the fact I need them all the time.
    – isekaijin
    Commented Feb 1, 2009 at 6:09
  • I feel your pain Eduardo. I can't stand ORM frameworks, but sometimes they're the least-worst way to solve a particular problem. But yeah, I hate them too.
    – benjismith
    Commented Feb 2, 2009 at 16:46
58 votes

Opinion: Unit tests don't need to be written up front, and sometimes not at all.

Reasoning: Developers suck at testing their own code. We do. That's why we generally have test teams or QA groups.

Most of the time the code we write is too intertwined with other code to be tested separately, so we end up jumping through patterned hoops to provide testability. Not that those patterns are bad, but they can sometimes add unnecessary complexity, all for the sake of unit testing...

... which often doesn't work anyway. To write a comprehensive unit test requires a lot of time. Often more time than we're willing to give. And the more comprehensive the test, the more brittle it becomes if the interface of the thing it's testing changes, forcing a rewrite of a test that no longer compiles.

  • 3
    Yeah, unit tests up front don't really make sense. If I wrote it down, I thought about the possibility. If I thought about the possibility, unless I'm a complete moron it'll at least work the first time around where the test would apply. Testing needs to catch what I DIDN'T think about! Commented Jan 2, 2009 at 15:00
  • 1
    Phoenix - you have a point about only catching what you didn't think about but I disagree with your overall point. The value of the tests is that they form a spec. Later, when I make a "small change" - the tests tell me I'm still Ok. Commented Jan 2, 2009 at 15:13
  • 5
    Unit tests are also about managing change. It's not the code that you are writing right now that needs the tests, but the code after the next iteration of change that will need it. How can you re-factor code if you have no way to prove that what it did before the change is still what it does after? Commented Jan 6, 2009 at 2:39
  • 5
    Everyone writes the unit test that checks open() fails if the file doesn't exist. No one writes the unit test for what happens if the username is 100 characters on a tablet PC with a right-left language and a Turkish keyboard. Commented Jan 9, 2009 at 17:42
  • 1
    I think this misses the point of test driven development, which hurts the argument. It isn't about testing edge cases, it is about driving design.
    – Yishai
    Commented Apr 29, 2009 at 18:29
57 votes

All variables/properties should be readonly/final by default.

The reasoning is a bit analogous to the sealed argument for classes, put forward by Jon. One entity in a program should have one job, and one job only. In particular, it makes absolutely no sense for most variables and properties to ever change value. There are basically two exceptions.

  1. Loop variables. But then, I argue that the variable actually doesn't change value at all. Rather, it goes out of scope at the end of the loop and is re-instantiated in the next iteration. Therefore, immutability would work nicely with loop variables, and everyone who tries to change a loop variable's value by hand should go straight to hell.

  2. Accumulators. For example, imagine the case of summing over the values in an array, or even a list/string that accumulates some information about something else.

    Today, there are better means to accomplish the same goal. Functional languages have higher-order functions, Python has list comprehension and .NET has LINQ. In all these cases, there is no need for a mutable accumulator / result holder.

    Consider the special case of string concatenation. In many environments (.NET, Java), strings are actually immutable. Why then allow assignment to a string variable at all? Much better to use a builder class (e.g. a StringBuilder) all along.

I realize that most languages today just aren't built to acquiesce in my wish. In my opinion, all these languages are fundamentally flawed for this reason. They would lose nothing of their expressiveness, power, and ease of use if they would be changed to treat all variables as read-only by default and didn't allow any assignment to them after their initialization.
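
In Java terms (a sketch of my own; streams play the role the answer assigns to LINQ and list comprehensions), the accumulator disappears and every local can be final:

import java.util.List;

public class Totals {
    public static void main(String[] args) {
        final List<Integer> values = List.of(3, 1, 4, 1, 5);
        // No mutable running total; the fold happens inside the stream.
        final int total = values.stream().mapToInt(Integer::intValue).sum();
        System.out.println(total); // 14
    }
}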

  • 1
    Functional languages are just superior that way. Of the non-functional languages, Nemerle seems to be the only one offering this feature. Commented Jan 2, 2009 at 15:39
  • 5
    Disagree but made me think. Interesting.
    – Steve B.
    Commented Jan 2, 2009 at 17:31
  • 2
    @AnthonyWJones: what costs does immutable-by-default have?
    – Juliet
    Commented Jan 2, 2009 at 21:25
  • 5
    @Jeff: I think this is at least debatable. Programming in general has a comprehension cost, any style of programming does. But I doubt that immutable-by-default incurs any additional comprehension cost at all, especially since it's much closer to the mathematical use of variables in equations. Commented Jan 4, 2009 at 21:25
  • 10
    Yes, and we'll only access read-only databases, stored on read-only media. Maybe once our programs have no mutable state, and therefore accomplish nothing we can move on to truly pure functional programming where nothing happens and the compiler with the best optimization outputs nothing. Commented Jan 7, 2009 at 8:46
52 votes

Realizing that sometimes good enough is good enough is a major jump in your value as a programmer.

Note that when I say "good enough", I mean "good enough", not some crap that happens to work. But then again, when you are under a time crunch, "some crap that happens to work" may be considered "good enough".

48 votes

If I were being controversial, I'd have to suggest that Jon Skeet isn't omnipotent...

  • Yes, apparently this is a very controversial view
    – Gareth
    Commented Jan 2, 2009 at 14:28
  • BLASPHE---!! Um, I mean, yes, I quite concur.
    – Mike Hofer
    Commented Jan 2, 2009 at 18:02
  • 3
    I think you might want to bring yourself up to date on the Jon Skeet facts. Remember: "Can Jon Skeet ask a question he cannot answer? Yes. And he can answer it too." He is omnipotent!
    – Vlad Gudim
    Commented Jan 7, 2009 at 13:57
  • 25
    At first I thought you said John Skeet isn't impotent. Commented Jan 11, 2009 at 3:34
  • 2
    @Totophil: Interesting comment when you consider: Jon Skeet asked this question (and he posted an answer...) Commented Feb 18, 2009 at 15:39
46 votes

"Java Sucks" - yeah, I know that opinion is definitely not held by all :)

I have that opinion because the majority of Java applications I've seen are memory hogs, run slowly, have horrible user interfaces, and so on.

G-Man

  • 2
    I think what you're trying to say is Swing sucks (as in JAVA UIs). Java back ends don't suck at all...unless that's the controversial bit ;)
    – rustyshelf
    Commented Jan 3, 2009 at 4:47
  • You don't have to be a Java partisan to appreciate an application like JEdit. Java has some serious crushing deficiencies, but so does every other language. Those of Java are just easier to recognize.
    – dreftymac
    Commented Jan 3, 2009 at 5:25
  • 9
    I think what you are trying to say is that the barrier for Java coding is so low that there are many sucky Java "programmers" out there writing complete crap. Commented Feb 19, 2009 at 0:44
  • 1
    I agree that most Java desktop apps I've seen suck. But I wouldn't say the same of server apps. Commented Mar 11, 2009 at 8:51
  • 3
    You're going to blame a programming language for 'horrible user interfaces'? Surely that is a fault of the UI designer. And while I'm sure Java has its share of poorly coded software that runs slowly and consumes too much memory, it is not at all hard to write Java programs that run efficiently and use memory only as needed. Having worked on a Java based web crawler capable of crawling 100s of millions of URIs I can attest to this.
    – Kris
    Commented May 30, 2009 at 22:41
45 votes

Okay, I said I'd give a bit more detail on my "sealed classes" opinion. I guess one way to show the kind of answer I'm interested in is to give one myself :)

Opinion: Classes should be sealed by default in C#

Reasoning:

There's no doubt that inheritance is powerful. However, it has to be somewhat guided. If someone derives from a base class in a way which is completely unexpected, this can break the assumptions in the base implementation. Consider two methods in the base class, where one calls another - if these methods are both virtual, then that implementation detail has to be documented, otherwise someone could quite reasonably override the second method and expect a call to the first one to work. And of course, as soon as the implementation is documented, it can't be changed... so you lose flexibility.

C# took a step in the right direction (relative to Java) by making methods sealed by default. However, I believe a further step - making classes sealed by default - would have been even better. In particular, it's easy to override methods (or not explicitly seal existing virtual methods which you don't override) so that you end up with unexpected behaviour. This wouldn't actually stop you from doing anything you can currently do - it's just changing a default, not changing the available options. It would be a "safer" default though, just like the default access in C# is always "the most private visibility available at that point."

By making people explicitly state that they wanted people to be able to derive from their classes, we'd be encouraging them to think about it a bit more. It would also help me with my laziness problem - while I know I should be sealing almost all of my classes, I rarely actually remember to do so :(

Counter-argument:

I can see an argument that says that a class which has no virtual methods can be derived from relatively safely without the extra inflexibility and documentation usually required. I'm not sure how to counter this one at the moment, other than to say that I believe the harm of accidentally-unsealed classes is greater than that of accidentally-sealed ones.
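
To make the hazard concrete, a sketch of my own in Java - where, as the answer notes, methods are virtual by default - showing how an internal self-call silently becomes part of the base class's contract:

class Account {
    private int balance;

    public void deposit(int amount) { balance += amount; }

    // Happens to call deposit() internally - an implementation detail today.
    public void depositAll(int[] amounts) {
        for (int amount : amounts) {
            deposit(amount);
        }
    }
}

class AuditedAccount extends Account {
    private int depositCount;

    @Override
    public void deposit(int amount) {
        depositCount++;        // assumes every deposit routes through here...
        super.deposit(amount);
    }
    // ...so if the base class ever stops calling deposit() from
    // depositAll(), auditing silently breaks - the document-or-freeze
    // dilemma described above.
}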

  • 2
    +1 from me. I very rarely have to remove a sealed modifier (and I make everything sealed by default, unless it is immediately clear that it cannot be sealed).
    – user49572
    Commented Jan 2, 2009 at 14:18
  • 3
    i think this is an anti-pattern. Classes without inheritance are just modules. Please don't pretend to know what all future programmers will need to do with your code. Commented Jan 2, 2009 at 18:39
  • 2
    Given your reasoning, it's difficult to disagree. However - if I wished to use your class for a purpose which you didn't intend, but through some clever overriding/application of your base methods/properties it will suit my purpose, isn't that my prerogative rather than yours? Commented Jan 2, 2009 at 22:57
  • 1
    Even so, I should understand the risks in deriving from a non-frozen class. Any changes you make in an unsealed class carry the same penalty, so all you're doing by making everything default-sealed is making it harder to use your code in my own way. Commented Jan 3, 2009 at 21:10
  • 1
    I vastly prefer mocking of interfaces instead of classes anyway, so it's never been an issue for me.
    – Jon Skeet
    Commented Jan 7, 2009 at 14:54
43 votes

Bad Programmers are Language-Agnostic

A really bad programmer can write bad code in almost any language.

41 votes

A Clever Programmer Is Dangerous

I have spent more time trying to fix code written by "clever" programmers. I'd rather have a good programmer than an exceptionally smart programmer who wants to prove how clever he is by writing code that only he (or she) can interpret.

  • 1
    Real clever programmers are those that find the good answer while making it maintainable. Either that or those who hide their names from comments so users won't backfire asking for changes. Commented Jan 5, 2009 at 13:55
  • 3
    Real genius is seeing how really complex things can be solved in a really simple way. People who write needlessly complex code are just assholes who want to feel superior to the world around them. Commented Jan 26, 2009 at 9:56
  • +1 Good programmers know their own limitations - if it's so clever you can only just understand it when you're writing it, well, it's probably wrong now, and you'll never understand it in 6 months time when it needs changing.
    – MarkJ
    Commented Jan 27, 2009 at 11:51
  • 17
    "Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." --unknown Commented May 5, 2009 at 18:53
  • 3
    Robert, great quote: BTW it's from Brian Kernighan not "unknown"
    – MarkJ
    Commented Jun 1, 2009 at 18:28
40 votes

If you have any idea how to program you are not fit to place a button on a form

Is that controversial enough? ;)

No matter how hard we try, it's almost impossible to have appropriate empathy with 53-year-old Doris who has to use our order-entry software. We simply cannot grasp the mental model of what she imagines is going on inside the computer, because we don't need to imagine: we know what's going on, or have a very good idea.

Interaction Design should be done by non-programmers. Of course, this is never actually going to happen. Contradictorily I'm quite glad about that; I like UI design even though deep down I know I'm unsuited to it.

For further info, read the book The Inmates Are Running the Asylum. Be warned, I found this book upsetting and insulting; it's a difficult read if you are a developer that cares about the user's experience.

  • Excellent point. I re-learn this point the hard way every time I try to teach my parents (in their early 70s) how to use something on the computer or their cell phones. Commented Jan 13, 2009 at 17:20
  • 4
    I disagree. I don't think they are mutually exclusive. To take the opposite, people who have never used a computer before are the best interface designers. Commented Jan 13, 2009 at 19:47
  • I disagree, but only in the sense that most interface design decisions seem to be made by management.
    – Dave
    Commented Jan 13, 2009 at 23:55
  • 1
    I'd say they're definitely not mutually exclusive. I would more likely say that management should never decide where to put the button. I've had some of the most complicated interfaces ever created that way.
    – Sam Erwin
    Commented Apr 2, 2009 at 18:43
  • 9
    Interaction Design by users is what gave MySpace its reputation for vomit-inducing pages. Commented Jul 16, 2009 at 15:18
40 votes

Avoid indentation.

Use early returns, continues or breaks.

instead of:

if (passed != NULL)
{
   for(x in list)
   {
      if (peter)
      {
          print "peter";
          // more code
          // ...
      }
      else
      {
          print "no peter?!";
      }
   }
}

do:

if (passed == NULL)
    return false;

for(x in list)
{
   if (!peter)
   {
       print "no peter?!";
       continue;
   }

   print "peter";
   // more code
   // ...
}

  • 2
    I wouldn't apply this as a rule, but I definitely don't hesitate to take this route when it can reduce complexity and improve readability. +1 Why do you need peter so badly, though?
    – P Daddy
    Commented Jan 9, 2009 at 23:19
  • 1
    Not a fan of 'cavern code' are we? :) I have to agree however. I've actually worked on 'cavern code' that had more than an ENTIRE PAGE of just closing braces.... And that was on a 1920x1600 monitor (or whatever the exact res is).
    – LarryF
    Commented Jan 14, 2009 at 0:36
  • You should check out "Spartan programming" - this seems like a similar style.
    – Keith
    Commented Mar 9, 2009 at 10:45
    It is not indentation you are arguing against, it's deeply nested conditional and loop blocks. I fully concur in that regard. I've found that enforcing a code style with a maximum line length tends to discourage this behavior somewhat.
    – Kris
    Commented May 30, 2009 at 23:48
  • 2
    I don't like the continue here. Commented Oct 18, 2009 at 4:30
