98
votes

I just graduated with a degree in CS and I currently have a job as a Junior .NET Developer (C#, ASP.NET, and web forms). Back when I was still in university, the subject of unit testing did get covered, but I never really saw the benefits of it. I understand what it's supposed to do, namely, determine whether or not a block of code is fit for use. However, I've never actually had to write a unit test before, nor did I ever feel the need to.

As I already mentioned, I'm usually developing with ASP.NET web forms, and recently I've been thinking of writing some unit tests. But I've got a few questions about this.

I've read that unit testing often happens through writing "mocks". While I understand this concept, I can't seem to figure out how I'm supposed to write mocks for websites that are completely dynamic, where almost everything depends on data that comes from a database. For example: I use lots of repeaters that have ItemDataBound events, etc. (again, depending on data that is "unknown").

So question number 1: Is writing unit tests for ASP.NET web forms something that is done often, and if it is, how do I resolve the "dynamic environment" issue?

When I'm developing, I go through a lot of trial and error. That doesn't mean I don't know what I'm doing; it means that I usually write some code, hit Ctrl+F5, and see what happens. While this approach does the job most of the time, sometimes I get the feeling that I'm being a little clueless (because of my limited experience). I sometimes waste a lot of time this way as well.

So, question number 2: Would you guys advise me to start writing unit tests? I think it might help me in the actual implementation, but then again I feel like it might slow me down.

19
  • 1
I would just like to say that the other side of this is that some places require them as standard practice. I am at a place now where we are not allowed to check code in unless it is covered by unit tests. So it is worth noting that even if you don't believe in them, you may be required to write them down the road. I think it is a useful skill to learn and have in your arsenal. Many times I will find little bugs and mishaps in my code from writing the tests; it's almost a little QA session with yourself.
    – Adam
    Commented Jul 24, 2012 at 22:12
  • 9
    How exactly did you cover unit testing without writing any unit tests? Was this one of those "grade yourself" or "design your own curriculum" schools?
    – JeffO
    Commented Jul 24, 2012 at 23:52
  • 4
Unit tests are questionable - great for dynamic languages, less useful the stricter your language is - but DO BUILD TESTS. They don't have to be unit tests, but you really should have integration tests you can run with JUnit; those are very useful.
    – Bill K
    Commented Jul 25, 2012 at 0:47
  • 13
    -1 Sorry, I'm all for good arguments, even if I do not agree, but this is just wrong. No one claims "unit tests are for catching bugs in new code", so that's a straw man - unit tests are for catching regressions. And the Dijkstra quote is taken out of context - the context is that testing is pointless if you have a formal specification of your problem.
    – sleske
    Commented Jul 25, 2012 at 10:27
  • 9
    "unit tests are for catching regressions". No. Automated tests are for catching regressions. Regressions invariably require the same tests to be run many hundreds of times so it is worth the effort to automate them. Unfortunately many of the responses and comments on this question are really dealing with the issue "Are automated tests that helpful?". Unit tests may be a form of automated test but they have a completely different focus. I certainly consider automated tests to be worth their weight in gold, but that shouldn't be used as an argument to justify unit tests (or TDD for that matter).
    – njr101
    Commented Jul 25, 2012 at 15:38

18 Answers 18

124
votes

In my opinion: yes they are, and yes you should.

  1. They give you confidence in the changes you make (everything else still works). This confidence is what you need to mold the code; otherwise you might be afraid to change things.

  2. They make your code better; most simple mistakes are caught early with the unit tests. Catching bugs early and fixing them is always cheaper than fixing them later, e.g., when the application is in production.

  3. They serve as documentation for other developers on how your code works and how to use it.

The first problem you face is that ASP.NET in itself does not help you to write unit tests—actually it works against you. If you have any choice, start using ASP.NET MVC, which was created with unit testing in mind. If you can't use ASP.NET MVC, you should use the MVP pattern in ASP.NET so at least you can unit test your logic easily.
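To make the MVP suggestion concrete, here is a minimal sketch of the pattern (all type and member names here are invented for illustration): the page's code-behind implements a view interface, the presenter holds the logic, and a unit test can drive the presenter with fake view and repository objects, with no web server or database involved.

```csharp
// Hypothetical MVP sketch. The presenter only knows about interfaces,
// so a unit test can drive it with fakes instead of a page and a database.
public interface IOrderView
{
    string CustomerName { get; }      // e.g. read from a TextBox
    decimal Total { get; set; }       // e.g. written to a Label
    string ErrorMessage { get; set; }
}

public interface IOrderRepository
{
    decimal GetOutstandingTotal(string customerName);
}

public class OrderPresenter
{
    private readonly IOrderView _view;
    private readonly IOrderRepository _repository;

    public OrderPresenter(IOrderView view, IOrderRepository repository)
    {
        _view = view;
        _repository = repository;
    }

    // The logic that would otherwise live in Page_Load, now testable.
    public void LoadTotal()
    {
        if (string.IsNullOrEmpty(_view.CustomerName))
        {
            _view.ErrorMessage = "No customer selected.";
            return;
        }
        _view.Total = _repository.GetOutstandingTotal(_view.CustomerName);
    }
}

// Test doubles: no ASP.NET, no database.
public class FakeOrderView : IOrderView
{
    public string CustomerName { get; set; }
    public decimal Total { get; set; }
    public string ErrorMessage { get; set; }
}

public class FakeOrderRepository : IOrderRepository
{
    public decimal GetOutstandingTotal(string customerName) => 42m;
}
```

A unit test then builds the fakes, calls LoadTotal(), and asserts on view.Total or view.ErrorMessage; the real .aspx code-behind implements IOrderView and forwards its events to the presenter.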

Besides that you just need to get proficient in writing unit tests. If you practice TDD, your code is created testable—in other words, nice and clean.

I would advise you to practice, and pair program. While reading:

Or, for a first overview:

8
  • 4
Having just started with unit testing, I would have to agree. Not only is it a good habit and very useful combined with a TDD approach, but it saves so much time. I don't think my projects could be as useful if I was not able to just run a unit test and verify that everything is working properly even after adding a new feature or fixing a bug. I can't think of doing regression testing any other way.
    – kwelch
    Commented Jul 24, 2012 at 19:51
  • 21
    If unit-tests are working as a documentation, there is something wrong. Reading 500 lines of code to understand how 5 lines of code work is backwards.
    – Coder
    Commented Jul 24, 2012 at 21:00
  • 4
    @Coder: when you test higher level methods, it does involve lot more than 5 lines of code.
    – user2567
    Commented Jul 24, 2012 at 21:11
  • 7
    @coder: documentation of a class tells you the services that class' instances provide. It tells you less information on how this class' instances are used in the larger context, that is, the interaction between objects. Tests give you typical code of interaction in some cases, which has been precious as a starting point so many times I can't even count them. Commented Jul 24, 2012 at 21:14
  • 21
    @Coder: It's not documenting what the code does, it's documenting the assumptions inherent in that code, i.e. why and when it's supposed to work. If the underlying assumptions are invalidated by a change to the SUT, one or more unit tests should fail. This doesn't replace higher-level design/architecture documentation, or even XML docs, but it covers what those things never can.
    – Aaronaught
    Commented Jul 25, 2012 at 0:50
94
votes

No.

The concept behind unit tests is based on a premise that has been known to be false since before unit testing was ever invented: the idea that tests can prove that your code is correct.

Having lots of tests that all pass proves one thing and one thing only: that you have lots of tests which all pass. It does not prove that what the tests are testing matches the spec. It does not prove that your code is free from errors that you never considered when you wrote the tests. (And the things that you thought to test were the possible issues you were focusing on, so you're likely to have gotten them right anyway!) And last but not least, it does not prove that the tests, which are code themselves, are free from bugs. (Follow that last one to its logical conclusion and you end up with turtles all the way down.)

Dijkstra trashed the concept of tests-as-proof-of-correctness way back in 1988, and what he wrote remains just as valid today:

It is now two decades since it was pointed out that program testing may convincingly demonstrate the presence of bugs, but can never demonstrate their absence. After quoting this well-publicized remark devoutly, the software engineer returns to the order of the day and continues to refine his testing strategies, just like the alchemist of yore, who continued to refine his chrysocosmic purifications.

The other problem with unit testing is that it creates tight coupling between your code and the test suite. When you change code, you'd expect some bugs to show up that would break some tests. But if you're changing code because the requirements themselves have changed, you'll get a lot of failing tests, and you'll have to manually go over each one and decide whether or not the test is still valid. (And it's also possible, though less common, that an existing test that should be invalid will still pass because you forgot to change something that needed to be changed.)

Unit testing is just the latest in a long line of development fads that promise to make it easier to write working code without actually being a good programmer. None of them have ever managed to deliver on their promise, and neither does this one. There is simply no shortcut for actually knowing how to write working code.

There are some reports of automated testing being genuinely useful in cases where stability and reliability are of paramount importance. For example, the SQLite database project. But what it takes to achieve their level of reliability is highly uneconomical for most projects: a test-to-actual-SQLite-code ratio of almost 1200:1. Most projects can't afford that, and don't need it anyway.

35
  • 69
    It proves that your code behaves correctly for those tests. It's a freaking lot if you ask me. Commented Jul 24, 2012 at 19:35
  • 111
    Who today actually believes unit tests are "proof-of-correctness"? Nobody should think that. You are correct that they only prove that those tests pass, but that's one more data point than you had before writing the unit tests. You need testing at many different layers to give yourself insight into the quality of your code. Unit tests don't prove your code is free from defects, but they do raise your confidence (or should...) that the code does what you designed it to do, and continues to do tomorrow what it does today. Commented Jul 24, 2012 at 20:09
  • 40
But if the test itself is buggy, then you have false confidence; and if the manual tester does not perform their job correctly, you also have false confidence. Commented Jul 24, 2012 at 20:31
  • 33
    @MasonWheeler: sorry, Mason, I'm not convinced. In more than a couple decades of programming I don't think I've ever personally seen a case where a test gave a false sense of security. Maybe some of them did and we fixed them, but in no way did the cost of those unit tests outweigh the enormous benefits of having them. If you're able to write code without testing it at the unit level, I'm impressed, but the vast, vast majority of programmers out there are unable to do that consistently. Commented Jul 24, 2012 at 20:32
  • 41
    If you wrote a buggy test that your code passed, your code is likely buggy too. The odds of writing buggy code and a buggy test is a whole lot less than buggy code alone. Unit tests are not a panacea. They are a good extra layer (where prudent) to reduce the burden on manual testers.
    – Telastyn
    Commented Jul 24, 2012 at 20:43
60
votes

If you've ever seen the benefit of writing a main method to test some small piece of code in school, unit testing is the professional/enterprise version of that same practice.

Also imagine the overhead of building the code, starting your local web server, browsing to the page in question, entering the data or setting the input to the proper test seed, submitting the form, and analyzing the results... versus building and hitting the NUnit run button.
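As an illustration of that second workflow (the business rule and all names are invented, and this assumes the NUnit package is referenced): instead of rebuilding, launching the site, and filling in a form to check a pricing rule, one small test fixture exercises the rule directly and runs in milliseconds.

```csharp
using NUnit.Framework;

// Hypothetical rule under test: 10% off for bulk orders.
public static class DiscountCalculator
{
    public static decimal Apply(decimal price, int quantity)
    {
        // Assumed business rule, for illustration only.
        return quantity >= 10 ? price * 0.9m : price;
    }
}

[TestFixture]
public class DiscountCalculatorTests
{
    [Test]
    public void BulkOrdersGetTenPercentOff()
    {
        Assert.AreEqual(90m, DiscountCalculator.Apply(100m, 10));
    }

    [Test]
    public void SmallOrdersPayFullPrice()
    {
        Assert.AreEqual(100m, DiscountCalculator.Apply(100m, 9));
    }
}
```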

Here is a fun image too (a chart of geeks vs. non-geeks doing repetitive tasks):

I found this image here:
http://www.howtogeek.com/102420/geeks-versus-non-geeks-when-doing-repetitive-tasks-funny-chart/

9
  • 2
Yes, you can. Isolate the web layer in your unit tests to test web configuration (URLs, input validation, etc.). By stubbing the web and database layers you can test the business tier without a database or web server. I use dbunit to test my DB. If you want you can still do full integration testing, but I do that as a specific task after development. Commented Jul 24, 2012 at 21:44
  • 12
    Cute graphic, but it's got the scale badly wrong. As I pointed out in my answer, SQLite's full test coverage requires approximately 1200x more code in tests than actual code in the product itself. And even if you're not that obsessive about full coverage, you still need several times more tests than product to even approach any useful level of coverage in your tests. To be accurate here, the vertical part of that red line would have to keep going up and up and up for about 3 page lengths. Commented Jul 24, 2012 at 22:55
  • 27
@MasonWheeler That's a very nice horse you're beating; I think it may have died, though... SQLite has that many tests because it's a goddamn database. I want to be sure that works. As another example, the Silverstripe framework has close to a 1:1 ratio of tests to code, so it's not like SQLite is representative.
    – Aatch
    Commented Jul 25, 2012 at 0:24
  • 15
    @MasonWheeler You're failing at the first of Zeno's paradoxes. If you want to scale up this image to SQLite's level of testing, the X-axis will have to also expand something like 100 times its current length.
    – Izkata
    Commented Jul 25, 2012 at 1:53
  • 2
This is especially true when you're a backend developer who doesn't have the time to make a black-and-white webpage just to test your backend logic. All I have to do is supply mock request and session objects and run my controllers through a unit test, and at the end of it all I know if it's right or wrong. If you use DI, simply inject a DAO which interacts with an in-memory database so that your database doesn't change, or simply throw an Exception and catch it so that your data doesn't get committed. I say YES to unit testing. Commented Jul 25, 2012 at 6:43
48
votes

Unit testing has something of a mystique about it these days. People treat it as if 100% test coverage is a holy grail, and as if unit testing is the One True Way of developing software.

They're missing the point.

Unit testing is not the answer. Testing is.

Now, whenever this discussion comes up, someone (often even me) will trot out Dijkstra's quote: "Program testing can demonstrate the presence of bugs, but never demonstrate their absence." Dijkstra is right: testing is not sufficient to prove that software works as intended. But it is necessary: at some level, it must be possible to demonstrate that software is doing what you want it to.

Many people test by hand. Even staunch TDD enthusiasts will do manual testing, although they sometimes won't admit it. It can't be helped: just before you go into the conference room to demo your software to your client/boss/investors/etc., you'll run through it by hand to make sure it will work. There's nothing wrong with that, and in fact it would be crazy to just expect everything to go smoothly without running through it manually -- that is, testing it -- even if you have 100% unit test coverage and the utmost confidence in your tests.

But manual testing, even though it is necessary for building software, is rarely sufficient. Why? Because manual testing is tedious, and time-consuming, and performed by humans. And humans are notoriously bad at performing tedious and time-consuming tasks: they avoid doing them whenever possible, and they often don't do them well when they're forced to.

Machines, on the other hand, are excellent at performing tedious and time-consuming tasks. That's what computers were invented for, after all.

So testing is crucial, and automated testing is the only sensible way to ensure that your tests are employed consistently. And it is important to test, and re-test, as the software is developed. Another answer here notes the importance of regression testing. Due to the complexity of software systems, frequently seemingly-innocuous changes to one part of the system cause unintended changes (i.e. bugs) in other parts of the system. You cannot discover these unintended changes without some form of testing. And if you want to have reliable data about your tests, you must perform your testing in a systematic way, which means you must have some kind of automated testing system.

What does all this have to do with unit testing? Well, due to their nature, unit tests are run by the machine, not by a human. Therefore, many people are under the false impression that automated testing equals unit testing. But that's not true: unit tests are just extra small automated tests.

Now, what is the value in extra small automated tests? The advantage is that they test components of a software system in isolation, which enables more precise targeting of testing and aids in debugging. But unit testing does not intrinsically mean higher-quality tests. It often leads to higher-quality tests, because it covers software at a finer level of detail. But it is possible to test only the behavior of a complete system, and not its component parts, and still test it thoroughly.

But even with 100% unit test coverage, a system still may not be thoroughly tested. Because individual components may work perfectly in isolation, yet still fail when used together. So unit testing, while highly useful, is not sufficient to ensure that software works as expected. Indeed, many developers supplement unit tests with automated integration tests, automated functional tests, and manual testing.

If you're not seeing value in unit tests, perhaps the best way to start is by using a different kind of automated test. In a web environment, using a browser automation testing tool like Selenium will often provide a big win for a relatively small investment. Once you've dipped your toes in the water, you'll more easily be able to see how helpful automated tests are. And once you have automated tests, unit testing makes a lot more sense, since it provides a faster turnaround than big integration or end-to-end tests, since you can target tests at just the component that you're currently working on.

TL;DR: don't worry about unit testing just yet. Just worry about testing your software first.

1
  • Unit tests are written by humans too, and could be wrong.
    – Seun Osewa
    Commented Aug 5, 2012 at 11:00
41
votes

It Depends

This is an answer that you will see a lot of when it comes to software development, but the usefulness of unit tests really depends upon how well they are written. A nominal number of unit tests that check over the functionality of the application for regression testing can be quite useful; however, a plethora of simple tests that check the returns of functions can be quite useless and provide a false sense of security because the application "has lots of unit tests."

Furthermore, your time as a developer is valuable and time spent writing unit tests is time not spent writing new features. Again, this is not to say you shouldn't write unit tests, but does every single public function need a unit test? I would submit that the answer is no. Does the code that is performing user input validation need unit testing? Quite likely as it is a common failure point in applications.
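To illustrate the "test the likely failure points first" idea, here is a hypothetical sketch of the kind of input-validation code worth covering before anything else. The class and its rules are invented; the point is that the boundary cases (empty input, non-numeric text, out-of-range values) are cheap to pin down in unit tests and are exactly where applications commonly break.

```csharp
// Illustrative validation helper. The 0..130 range is an assumed rule.
public static class AgeValidator
{
    public static bool TryParseAge(string input, out int age)
    {
        age = 0;
        // Reject missing or blank input.
        if (string.IsNullOrWhiteSpace(input)) return false;
        // Reject non-numeric text.
        if (!int.TryParse(input.Trim(), out age)) return false;
        // Reject out-of-range values.
        return age >= 0 && age <= 130;
    }
}
```

Each rejected case above is one small, obvious unit test; the happy path is one more.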

This is one of those areas where experience comes into play. Over time you will recognize the parts of the application that could benefit from unit tests, but it will be a while before you reach that point.

2
  • This is a good point. You have to know how to write good tests which capture the value of the SUT.
    – Andy
    Commented Jul 25, 2012 at 12:54
  • 3
    This basically covers how I was taught to do unit tests -- write tests to cover the most important/breakable situations first, and then add more tests when your assumptions turned out wrong (I thought something could "never fail", but it did). Commented Jul 25, 2012 at 15:57
38
votes

Projects done for university classes differ vastly from the business applications you'll write at work. The difference is lifespan. How long does a university project "live"? In most cases, it starts when you write the first line of code and ends when you get your mark. You could say that, effectively, it only lives for the time of implementation. "Release" is usually equal to its "death".

Software done for business surpasses that release-death point of a university project and continues to live as long as the business needs it to. Which is very long. When talking about money, nobody will spend a penny to have "cooler and neater code". If software written in C 25 years ago still works and is good enough (as understood by its business owner's needs), expect to be asked to maintain it (adding new features, improving old ones - changing source code).

We come to one very important point - regression. At the place I work we have two teams maintaining two apps: one written around 5-6 years ago with very little test coverage*, and a second one, a newer version of the first application, with a full-blown test suite (unit, integration, and what have you). Both teams have dedicated manual (human) testers. Want to know how long it takes the first team to introduce a new, fairly basic feature? 3 to 4 weeks. Half of this time is "checking whether everything else still works". This is a busy time for the manual testers. Phones are ringing, people get upset, something is broken again. The second team usually deals with such issues in less than 3 days.

By no means am I saying that unit tests make your code error-proof, correct, or any other fancy word you can come up with. Nor do they make bugs magically disappear. But when combined with other methods of automated software testing, they make your applications much more maintainable than they would've been otherwise. This is a huge win.

And last but not least, comment by Brian which I think nails the whole issue:

(...) they do raise your confidence (or should...) that the code does what you designed it to do, and continues to do tomorrow what it does today.

Because between today and tomorrow, somebody might make a tiny change which will cause the oh-so-important report generator code to crash. Tests push the odds a little bit to your side that you'll find this out before your customer does.

* They do slowly introduce more and more tests into their code base, but we all know how this stuff usually goes.

1
  • 10
+1000 for linking to Software_Regression. Colleges present unit tests as something you must do out of sheer faith, without explaining in detail that there is a disease to prevent and control, and that that disease is called regression. (Then there's the problem of regression tests being something very different from unit tests; for that, the only read I found explaining it well is the free sample from this book.)
    – ZJR
    Commented Jul 24, 2012 at 23:47
25
votes

Is writing unit tests for ASP.NET web forms something that is done often, and if it is, how do I resolve the "dynamic environment" issue?

It is not often done. Working with UI elements isn't what unit tests are good at, since there's no great way to programmatically verify that the right things end up on the screen at the right places.

Would you guys advise me to start writing unit tests? I think it might help me in the actual implementation, but then again I feel like it might slow me down.

Where applicable, yes. They can be very helpful in verifying specific cases in a repeatable manner. They help act as 'friction' against troublesome changes, and as a failsafe when you're making better changes.

One thing to note is that they will usually slow you down.

This is okay.

Spending a little time now will (if done well) save you time in the future, because the tests catch some bugs, prevent the recurrence of some bugs, and allow you to be somewhat more comfortable making other improvements in the code to keep it from rotting.

2
  • 8
    +1 for actually answering the question's use case! Everyone else jumped on the "unit test" part and forgot about the "ASP.NET web forms" part...
    – Izkata
    Commented Jul 25, 2012 at 1:55
  • 1
This is a great answer and deserves more upvotes, the reason being that you've addressed the questions and been honest in your appraisal of slowing the developer down, without promising that every bug will be caught or prevented, which is completely and utterly true.
    – Mantorok
    Commented Jul 25, 2012 at 15:55
11
votes

I just graduated with a degree in CS and I currently have a job as a junior .NET developer (in C# and usually ASP.NET web forms - not ASP.NET MVC). Back when I was still in university, the subject of unit testing did get covered, but I never really saw the benefits of it.

That's because you never programmed big.

I understand what it's supposed to do - determine whether or not a block of code is fit for use - but I've never actually had to write a unit test before (nor did I ever feel the need to).

Writing unit tests forces your code to conform to a given mental structure that is testable, well documented, and reliable. It's much more than what you claim.

I've read that unit testing often happens through writing "mocks". While I understand this concept, I can't seem to figure out how I'm supposed to write mocks for websites that are completely dynamic, where almost everything depends on data that comes from a database.

Mock the database access with a mock class that returns model classes filled with well-defined, hardcoded data. Or fill a test database with fixture data and work from there.
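A small sketch of the first option (every name here is illustrative): the page or presenter depends only on a repository interface, so a test can hand it well-defined, hardcoded data instead of a live database, and the logic your ItemDataBound handler would apply per row becomes directly testable.

```csharp
using System.Collections.Generic;
using System.Linq;

// Illustrative model class, as returned by the data layer.
public class Customer
{
    public string Name { get; set; }
    public bool IsActive { get; set; }
}

// The page depends on this interface, not on a concrete database.
public interface ICustomerRepository
{
    IList<Customer> GetCustomers();
}

// Test double: returns well-defined, hardcoded model data.
public class FakeCustomerRepository : ICustomerRepository
{
    public IList<Customer> GetCustomers() => new List<Customer>
    {
        new Customer { Name = "Alice", IsActive = true },
        new Customer { Name = "Bob",   IsActive = false },
    };
}

// The decision the data-binding code would make, pulled out so it is testable.
public static class CustomerListLogic
{
    public static IList<Customer> ActiveCustomers(ICustomerRepository repo) =>
        repo.GetCustomers().Where(c => c.IsActive).ToList();
}
```

In production the same interface is implemented by a real data-access class; the test swaps in the fake and asserts on the filtered result.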

Would you guys advise me to start writing unit tests?

Hell yes. Read Kent Beck's Test-Driven Development first.

I think it might help me in the actual implementation, but then again I feel like it might slow me down.

It slows you down now. But when you have to debug for hours instead of days, you will change your mind.

Any advice will be appreciated :)

Test. Trust me. Unfortunately it must also be a management policy, but it's something that saves time. Tests stay; they are there, now and in the future. When you change the platform, run the tests, and they still pass, you know that your stuff works as expected.

8
votes

Edit: I of course joined the TDD pro/con dogpile and skipped question #1:

1 - Implementing unit tests in ASP.NET web forms:

First of all, if you think you can get them to go with MVC, fight for it like a rabid cavebear. As a front-end/UI dev, .NET MVC is what helped me stop hating .NET so I could concentrate better on hating every Java web solution I've ever run into. Unit tests are problematic because web forms really blurs the lines between the server and the client side. In any attempt to do unit testing I would put the focus on data manipulation and (hopefully) assume web forms handles the normalization of user input for you under the hood.

2 - On whether unit tests are worthwhile in the first place:

Alright, full-disclosure:

  • I am mostly self-taught. My formal training boils down to like one JavaScript, a PHP, and a C# class, plus my own personal study of OOP principles and reading from stuff like Design Patterns.

However,

  • I mostly write for the client-side web and the actual programming part of that is in one of the most fast and loose languages out there in regards to dynamic typing, first class functions and object mutability.

That means I don't write for the same compiler or virtual machine. I write for like 4-20 varying interpretations of three languages (yes, two of them merely declarative, but also determining the fundamental physical space of the UI I'm working with in different ways sometimes) and have been doing so since the interpretations were a lot more varied than they are today. And no, I'm not just some kid who plugs in jQuery stuff for disposable apps. I help build and maintain fairly sophisticated stuff with a lot of UI element complexity.

So yes, there are lots of opportunities for a few tweaks here and there to create a massive cascade of failures if your design skills are complete crap or you're throwing mediocre devs in large quantities at 1-2 quality dev problems.

My understanding of what TDD is supposed to do for you is that the tests are really more about forcing you to consider design more carefully and keep you focused on requirements. Fair enough, but the problem here is that it subverts what you should be doing, which is designing to an interface, to something subtly but fundamentally different, which is designing for tests of an interface. The difference to me is between drawing a clear picture that mommy won't have to guess the meaning of and filling the whole page in with green really fast so you can be the first kid to slap his crayons on the table and shout "done!" By shifting priority to results over process and design, you're basically encouraging continued implementation of garbage code, which is typically what was at the root of your problems in the first place.

And then of course there is the "not-the-real-point-of" yet often lauded side-benefit of the unit tests themselves helping you detect regression errors. TDD advocates tend to be a little wishy-washy on whether this is actually the goal or just a nifty side-effect, IMO, because they know damn well, or at least suspect, that this is simply not sufficient to establish the absence of bugs in your code, especially in a more dynamic language like JavaScript, where assuming you can even predict every possible scenario in a long chain of dependencies is foolhardy.

There is a place for automated testing in JS, but a much better use of your time than attaching a unit test to every single 'unit' of your code that comes into contact with another is making sure you don't have a bunch of garbage objects in there that duplicate work or whose intended usage is semantically ambiguous in the first place. You follow the DRY principle. You abstract things out for re-use/portability when the value of doing so becomes apparent (and not a minute before). You establish consistent processes and ways of doing things following more of a carrot than stick principle (i.e., it's too easy to use your stuff the right way to bother wanting to do it the wrong way). And for the love of all things foo and bar, you never indulge in massive cascading inheritance scheme anti-patterns as a means of code reuse.

All of the above have helped me reduce difficult-to-diagnose bugs in my code in a serious way, and you can trust that's a big priority for somebody who came up as a developer with a browser set that had nothing better to tell you than "uh, there's been a problem with an object of type 'Object' at this imaginary line number in an unspecified file" (gee, thanks, IE6). TDD, in my line of work, would not encourage these things. It would shift the focus to 100% results over process, where what's between point A and point B doesn't really matter as long as it works. That is a waste of time that would be better applied to making sure your stuff is legible, portable, and easy to modify without a lot of confusing breakage in the first place.

Or maybe I'm just being overly curmudgeonly about the paradigm I'm rooted in, but in my opinion, doing it right in the first place is a much more effective use of time than covering your butt for when you or everybody else on your team does it wrong. And nothing should force you to consider design over just getting things implemented. Design should be every programmer's freaking altar. And anything that "forces you" to do the right thing or protects you from yourself should be viewed with the same suspicion reserved for bottles of snake oil, IMO. Snake oil in modern IT and general development, if you're not yet aware, is sold by the liquid ton.

2
  • 1
    I write a lot of unit tests. I won't write code without them. But I agree with your sentiments. The tests should guide, not drive development. You can't automate thinking carefully about a problem.
    – FizzyTea
    Commented Jul 26, 2012 at 19:46
  • I have no problem with automated testing. But wholesale adoptions of it at every juncture seem more like panic than process to me. Saving that time for design and automating testing and validation at points where you're likely to interact with a variety of things you don't control is how I prefer it. Commented Jul 26, 2012 at 20:16
5
votes

In my experience, unit tests are indeed very useful when you start with them and stay with them, i.e. Test-Driven Development (TDD). Here's why:

  • Unit tests force you to think about what you want, and how to verify that you got it, before you write the code that does it. In a TDD scenario, you write the test first. You thus have to know what the code you're about to write needs to do, and how you can verify that it successfully did so, in order to write a "red" test that you then make "green" by writing the code to pass it. In many cases this forces you to think just a little more about the algorithm you are about to write to pass this test, which is always a good thing as it reduces logic errors and "corner cases". While you're writing the test, you're thinking "how could this fail" and "what am I not testing here", which leads to a more robust algorithm.

  • Unit tests force you to think about how the code you are about to write will be consumed. Before I learned TDD, there were MANY times where I wrote code expecting a dependency to work one way, and then was given a dependency written by a colleague that worked a completely different way. While this is still possible with TDD, the unit test you're writing forces you to think about how you want to use the object you're writing, because it is an example usage of said object. You will then hopefully write the object in such a way that it's easy to consume, and thus to adapt if necessary (though programming to predefined interfaces is a better overall solution to this problem which doesn't require TDD).

  • Unit tests allow you to "code by testing". A refactoring tool like ReSharper can be your best friend if you let it. You can use it to define the skeleton of new functionality as you are defining the usage in a test.

    For instance, say you need to create a new object MyClass. You start by creating an instance of MyClass. "But MyClass doesn't exist!", ReSharper complains. "Then create it" you say, with a press of Alt+Enter. And presto you have your class definition. In the next line of test code you call a method MyMethod. "But that doesn't exist!", says ReSharper. "Then create it", you repeat, with another Alt+Enter. You have, with a few key presses, defined the "skeleton" of your new code. As you continue to flesh out the usage, the IDE will tell you when something doesn't fit, and usually the solution is simple enough that the IDE or a tool that plugs into it knows how to fix it.

    More extreme examples conform to the "Triple-A" model; "Arrange, Act, Assert". Set everything up, perform the actual logic you are testing, then assert that the logic is correct. Coding in this way, it is very natural to code the solution into the test; then, with a few keypresses, you can extract that logic and put it somewhere where it can be used in production code, then make small changes to the test that point it at the new location of the logic. Doing it this way forces you to architect the code in a modular, easy-to-reuse way because the units you are testing must still be accessible.

  • Unit tests run many orders of magnitude faster than any manual test you could think of. There is a point, very quickly met in the case of larger projects, where the time overhead of unit testing begins to pay for itself by reducing the time spent manually testing. You must ALWAYS run the code you just wrote. Traditionally, you did so manually, by starting up a large program, navigating through the UI to set up the situation you have changed the behavior for, and then verifying the results again through the UI, or in the form of data produced. If the code were TDDed, you just run the unit tests. I guarantee you if you are writing good unit tests, the latter option will be many times faster than the former.

  • Unit tests catch regression. Again, there have been many times before I learned TDD where I went in, made what I thought was a surgical change to a piece of code, verified that the change fixed an errant behavior (or produced a new desired behavior) in the reported situation, and checked it in for release, only to find that the change broke some other corner case in a completely different module that happened to reuse that same code block. In a TDDed project, I can write a test verifying a change I'm about to make, make the change, then run the full suite, and if all other usages of the code were TDDed and my change broke something, the tests for those other things will fail, and I can investigate.

    If developers had to manually find and test all the lines of code that could be affected by a potential change, nothing would ever get done because the cost of determining the impact of a change would make making the change infeasible. At the very least, nothing would be SOLID and thus easily maintained, because you would never dare touch something that may be used in multiple places; you would instead roll your own very-similar-but-incorporating-your-minor-change solution for this one case, violating SRP and probably OCP, and slowly turning your codebase into a patchwork quilt.

  • Unit tests shape architecture in a typically-advantageous way. Unit tests are tests performed in isolation from any other logic. In order to be able to unit test your code, then, you must write the code in such a way that you can isolate it. Good design decisions such as loose coupling and dependency injection thus naturally shake out of the TDD process; dependencies must be injected so that in a test you can inject a "mock" or "stub" that produces the input or handles output for the situation being tested without creating "side effects". TDD's "usage first" mentality generally leads to "coding to interfaces", the essence of loose coupling. These good design principles then allow you to make changes in production such as replacing an entire class, without requiring large masses of the codebase to be changed to adapt.

  • Unit tests show that the code works, instead of proving the code works. At first glance you might think that a disadvantage; surely proof is better than demonstration? Theory's great; computational and algorithmic theory is the foundation of our work. But, a mathematical proof of correctness of an algorithm is not proof of correctness of the implementation. A mathematical proof only shows that code adhering to the algorithm should be correct. A unit test shows that the code actually written does what you thought it would do, and is thus evidence it is correct. This is generally of much more value all around than a theoretical proof.
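
The "Triple-A" shape described above is language-agnostic. Here is a minimal sketch in Python's built-in `unittest` rather than the C#/NUnit tooling this thread assumes; `apply_discount` is a made-up unit under test:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical unit under test: a tiny discount calculator."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    def test_ten_percent_off(self):
        # Arrange: set up the inputs
        price, percent = 200.0, 10
        # Act: perform the logic being tested
        result = apply_discount(price, percent)
        # Assert: verify the logic behaved as expected
        self.assertEqual(result, 180.0)

    def test_out_of_range_percent_raises(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

# A test runner would normally discover and run these; here we run them directly:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(ApplyDiscountTests)
outcome = unittest.TextTestRunner(verbosity=0).run(suite)
```

In a TDD workflow you would write these tests first, watch them fail "red", and only then write `apply_discount` to turn them "green".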

Now, all that said, there are disadvantages to unit testing:

  • You can't unit-test everything. You can architect your system to minimize the number of LOC not covered by a unit test, but there are quite simply going to be certain areas of the system that cannot be unit-tested. Your data-access layer can be mocked when used by other code, but the data-access layer itself contains a lot of side effects, and it is typically not feasible to unit test much (most?) of a Repository or DAO. Similarly, code that uses files, sets up network connections, etc. has side effects built in, and you simply cannot unit-test the line of code that does that. UI elements often cannot be unit-tested; you can test codebehind methods such as event handlers, you can unit test constructors and verify that handlers are plugged in, but there simply is no in-code substitute for a user clicking the mouse on a particular graphical element and watching the handler be called. Reaching these boundaries between what can and cannot be adequately unit-tested is called "scraping the edge of the sandbox"; beyond that point you are limited to using integration tests, automated acceptance tests, and manual testing to verify behavior.

  • Many advantages of unit testing do not apply without TDD. It is perfectly possible to write code, then write tests that exercise the code. They're still "unit tests"; however, by writing code first, you lose many of the advantages inherent in "test-first" development: code isn't necessarily architected in a way that can be easily tested, or even used in production; you don't get the "double-check" thought process inherent in writing the test and thinking about what you hadn't been thinking about; you don't code by testing; and if you write code, manually test it, see it work, and then write a unit test that fails, which is wrong: the code or the test? Your main advantages are regression prevention (you'll be alerted when code that previously passed its tests now fails) and high-speed verification versus manual testing. The loss of TDD's other advantages may tip the balance away from using unit tests at all.

  • Unit testing introduces an overhead. Quite simply, you're writing code to test the code you're writing. This will necessarily increase the total LOC of a development project, and yes, the LOC for tests can exceed the LOC for the actual project. Naive developers and non-developers will look at this state of affairs and say the tests are a waste of time.

  • Unit testing requires discipline. You have to write tests that will adequately exercise the codebase (good code coverage), you have to run them regularly (as in whenever you commit a change, the full suite should be run), and you have to keep everything "green" (all tests passing). When things break, you must fix them either by fixing code that doesn't meet expectations, or by updating the expectations of the tests. If you change the tests, you should be asking "why", and keeping a very close watch on yourself; it's incredibly tempting to simply change failing assertions to match current behavior, or to simply remove failing tests; but those tests should be based on requirements, and when the two don't match you have a problem. If these things aren't done, the tests lose their value: each one then only shows that the code worked as of the last time you touched the test (i.e. when you proved that your initial development or surgical change did what you thought it did).

  • Unit testing requires more equipment. The usual solution to the above need for discipline, and to the natural human tendency to get lazy and complacent, is a "build-bot" running a "Continuous Integration" software package like TeamCity, CruiseControl, etc., which performs unit tests, calculates code coverage metrics, and has other controls such as "triple-C" (coding convention compliance, à la FxCop). The hardware for the build-bot must be reasonably performant (otherwise it won't keep up with the rate of code check-ins the average team will make), and check-in procedures on the bot must be kept up to date (if a new library of unit tests is created, the build scripts that run unit tests must be changed to look in that library). This is less work than it sounds, but typically requires some technical expertise on the part of at least a few people on the team who know the nitty-gritty of how the various build processes work (and thus can automate them in scripts and maintain said scripts). It also still requires discipline in the form of paying attention when the build "breaks", and fixing whatever caused the break (correctly) before checking in anything new.

  • Unit testing can force code to be architected in a non-ideal way. While TDD is typically good for the modularity and reusability of code, it can be detrimental to the proper accessibility of code. Objects and members, placed in production libraries, can't be private or internal if they are directly used by unit tests. This can cause problems when other coders, now seeing an object, try to use it when they should instead use something else present in the library. Code reviews can help with this, but it can be a concern.

  • Unit testing generally prohibits "rapid application development" coding styles. If you're writing unit tests, you're not coding "fast-and-loose". That's typically a good thing, but when you're under a deadline imposed from outside your control, or you're implementing a very small-scoped change and the stakeholder (or your boss) is wondering why all this cruft has to happen just to change one line of code, it can simply be infeasible to write and maintain the proper unit test suite. Agile processes typically help with this by allowing the developers a say in the time requirements; remember, all the salesman has to do is say "yes we can" and they get the commission check, unless the process involves the people who have to actually do the work saying "no we can't" and being paid attention to. But, not everyone's Agile, and Agile has its own limitations.

4
votes

Unit tests are useful when used correctly; there are various problems with your logic.

"When I'm developing I go through a lot of trial-and-error," said Kevin.

Look at programming by coincidence. It's better to understand what the code should do and then prove that it does it via a unit test. Do not assume the code works simply because the UI didn't break when you ran the program!

Is writing unit tests for ASP.NET web forms something that is done often?

I have no statistical data on how often people test web forms, and it does not matter. The UI is hard to test, and unit tests should not be coupled to a UI anyway. Separate your logic into layers (class libraries that can be tested), and test the UI separately from the back-end logic.
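
As a sketch of that separation (in Python rather than the thread's C#, and with all names hypothetical), the testable logic lives in a plain class that the UI layer merely calls:

```python
# Hypothetical sketch: the calculation lives in a plain class with no UI
# dependencies, so a test can exercise it without spinning up a web server.

class InvoiceCalculator:
    """Back-end logic, UI-agnostic and therefore unit-testable."""
    def __init__(self, tax_rate):
        self.tax_rate = tax_rate

    def total(self, line_amounts):
        subtotal = sum(line_amounts)
        return round(subtotal * (1 + self.tax_rate), 2)

# The UI layer (a web-forms codebehind, a console app, anything) only delegates:
def on_calculate_clicked(line_amounts):
    return InvoiceCalculator(tax_rate=0.21).total(line_amounts)

# A unit test touches only the class, never the page:
assert InvoiceCalculator(0.21).total([10.0, 20.0]) == 36.3
```

The event handler stays a thin shim; everything worth testing sits behind it in a layer a test project can reference directly.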

So, question number 2: Would you guys advise me to start writing unit tests?

Yes. Once you get used to them, they speed you up. Maintenance is far more time consuming and costly than the initial development phase. Maintenance is greatly assisted with unit tests, even with initial testing and bug repair.

1
  • There are test frameworks out there that can run tests against aspx pages simply to check whether the page loads successfully in different scenarios. This might not be good enough, but it's better than nothing.
    – awe
    Commented Jul 25, 2012 at 12:44
4
votes

On a practical level, unit tests can be extremely useful for one very important reason: you can test one bit of code at a time.

When you write a whole sequence of complex steps and debug them at the end, you tend to find many bugs, and the process is generally harder because you are further along. In Visual Studio you can generate a quick test, change it a little, and run ONLY that test. You then know that, for instance, a method calling that method can rely on it. So it increases confidence.

Unit tests don't prove that your program is correct! They check for regressions in sensitive code and let you test new methods as you write them.

Unit tests are not meant to test front-ends: Think of a unit test as the small scratch project you create to test a new method you are writing, not as some sort of external condition that your code needs to meet.

Unit testing works great for collaborative environments: If I have to supply a method to a colleague who is using a completely different technology from mine (for instance, he is calling it from iOS), then I can quickly write a unit test to see if the method he wants a) returns the correct data, b) performs according to his specifications, and c) doesn't have any nasty bottlenecks.

2
  • "Unit tests are not meant to test front-ends" Who says that? I'm not questioning it. I'd just like to know because a lot of people seem to have the wrong idea and I'd like to whomp them with the that's-not-what-this-hugely-influential-TDD-advocate-source-says stick. Commented Jul 26, 2012 at 20:09
  • Some people, myself included, had that misconception when first encountering unit tests, so I am just clearing that up. I don't know why you feel like I am challenging you, in fact I fully agree with you.
    – Tjaart
    Commented Jul 27, 2012 at 5:58
4
votes

One thing that doesn't seem to be touched on is the complexity of testing different types of code.

When you are dealing with code that has simple inputs and outputs, it's generally easy to unit test, and if the tests are chosen with an eye to the edge cases of the code being tested, they'll give you a lot of confidence that it is right. Not writing tests for such code would be crazy. (Note that most code that goes into libraries meets these criteria.)

However, in other cases you end up having to construct huge mocks because the code is playing with complex underlying data (usually a database, but it doesn't have to be) or produces very complex output data (think of a routine that turns a seed value into a game level, for a pretty extreme example), and unit testing isn't nearly so useful. Covering everything in your test cases is usually a pipe dream, and when you do find a bug, it's almost always a case you never thought of and therefore couldn't have constructed a test for anyway.

There are no silver bullets; anything portrayed as a silver bullet isn't nearly as good as its proponents claim, but that doesn't mean it's useless.

4
votes

Writing unit tests for ASP.NET Web forms applications is NOT common and is very hard to accomplish. However, that does not mean a project should skip it.

Unit tests are cornerstones of mission-critical applications, and they provide reliability and the peace of mind that core functionality performs as expected.

Actually, you can introduce unit testing much more easily with the ASP.NET MVP (Model-View-Presenter) pattern in your web forms development. It introduces separation of concerns and the ability to write essential unit tests.


1
  • 2
    +1 It doesn't matter if you're doing unit tests or not, MVP is the way to go when working with Web Forms (maybe WinForms too).
    – simoraman
    Commented Jul 25, 2012 at 4:35
3
votes

The purpose of unit tests is to allow you to safely change your code (refactoring, improving, etc.) without the fear of breaking something inside the system. This proves very useful for large applications, where you do not really know all the effects of a code change (despite loose coupling).
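
As a tiny illustration (Python, made-up names): the tests below pin down the function's observable behavior, so its internals can be rewritten freely as long as the tests stay green:

```python
# Made-up example: the tests act as a safety net for refactoring.

def normalize_name(name):
    # Current implementation; any refactoring must keep the tests passing.
    return " ".join(part.capitalize() for part in name.split())

def test_normalize_name():
    assert normalize_name("  ada   LOVELACE ") == "Ada Lovelace"
    assert normalize_name("grace") == "Grace"

test_normalize_name()  # re-run after every change; a failure flags a regression
```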

3
  • 6
    Without fear is a false sense of security.
    – Coder
    Commented Jul 24, 2012 at 20:47
  • Maybe "less fear" is a better phrase, but the answer is still largely correct. Commented Jul 25, 2012 at 16:07
  • 1
    @Coder If your tests pass after the refactoring, then you can be pretty sure that your improvement in the code did not change the way the application behaves (roughly speaking, you did not introduce any new bugs; but this is not necessarily true, because you cannot achieve 100% test coverage).
    – Random42
    Commented Jul 25, 2012 at 16:37
2
votes

It is my experience that yes, unit tests are useful.

Unit Testing is about segregating portions of the code you write and ensuring that they work in isolation. This isolation may indeed involve creating fake or mock objects to talk to the object you're testing, or it may not. It depends very much on the architecture of your object.

Unit testing grants various benefits:

Primarily, it ensures that the units you test work the way you, the developer, think they should work. While this is less than it sounds (it doesn't necessarily say that the object is working correctly, only that it's working how you think it should), it is a far stronger statement than the trial-and-error method that is necessary without unit testing.

Additionally, it ensures that units continue to work the way you, the developer, thought they should work. Software changes, and this may impact units in unexpected ways.

Another side-effect is that it means that the units you test do work in isolation. This generally means that you begin programming to interfaces rather than concrete classes, reducing coupling and increasing cohesion: all common hallmarks of good design. Yes: merely enabling your units of code to have unit tests will naturally improve the design of your code.
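That "programming to interfaces" point can be sketched in a few lines (Python here rather than the thread's C#; all names are invented): the consumer depends on an abstraction, so a test can inject a stub in place of the real, slow, side-effecting dependency:

```python
# Illustrative sketch with hypothetical names: isolating a unit by
# injecting a test double instead of a real web service or database.

class RateProvider:
    """The 'interface' the consumer codes against."""
    def current_rate(self, currency):
        raise NotImplementedError

class FixedRateStub(RateProvider):
    """Test double: no network, no database, fully predictable."""
    def current_rate(self, currency):
        return 2.0

class PriceConverter:
    def __init__(self, rates):          # the dependency is injected
        self.rates = rates

    def convert(self, amount, currency):
        return amount * self.rates.current_rate(currency)

# The unit test exercises PriceConverter in complete isolation:
converter = PriceConverter(FixedRateStub())
assert converter.convert(10.0, "EUR") == 20.0
```

In production you would pass a real `RateProvider` implementation instead; the consumer never knows the difference, which is exactly the loose coupling being described.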

However...

Unit Testing is not the end of testing. Other types of testing are still required. For example, even when you've shown that units work the way you expect them to work, you still need to show that each unit works the way the units that talk to it expect it to work. That is, do your components integrate with each other correctly?

1
vote

For simple apps with a small number of layers to navigate (main window, then sub-menu), compiling and running to check changes may be fine.

For larger apps, where it takes 30 seconds of navigation to get to the page you're testing, this ends up being very expensive.

1
vote

I want to add a new side (actually, a rather old one) to this: unit tests aren't that useful if your code is well-designed.

I know most programmers don't do program design anymore; I still do, and I have found unit tests to be a rather time-consuming necessity to fit into a TDD culture. But so far, I have met only 1-2 minor bugs per thousands of lines of tests over my code - not just tests I wrote, but those official testers wrote as well.

I guess you won't do program design anymore either - the current crowd will pressure you into a mold not to do it - but perhaps it's worth remembering that unit testing is a very, very low-efficiency method compared to design.

This is a Dijkstraian argument: a unit test can only be executed against a program that is detailed enough to run.

If you draw flowcharts, state/action diagrams, sequence diagrams, and object inventories (and last AND least, class diagrams) for your code before actually writing it, you'll be able to kill off most of the potentially threatening bugs just by tracing around your lines and checking your names.

Nowadays class diagrams are generated from code, contain hundreds, sometimes thousands, of classes, and are completely unusable - anything containing more than 15 elements is beyond human comprehension.

You have to know which 10±5 classes matter for what you're doing right now, and be able to check your code from multiple viewpoints, each diagram representing EXACTLY the viewpoint you're looking at; that way you'll kill off thousands of bugs on paper.

  • Checking, in dynamic code, whether types match (by simply showing input/output types on a flowchart and connecting them with a pencil or a different color),
  • checking whether all states are handled (by checking the completeness of your conditions),
  • tracing the lines to make sure everything reaches an end (every flowchart should be a fully defined deterministic finite-state automaton, for the mathematically minded),
  • ensuring that certain components have nothing to do with each other (by extracting all their dependencies) (good for security),
  • simplifying code by converting dependencies into associations (dependencies come from which classes the member methods use),
  • checking names, looking for common prefixes, erasing synonyms, etc...

it's all just so much easier...

Also, I have found that my applications are more usable if they come directly from use cases - provided the use cases are written well (ACTOR verb subject: "Maintainer requests to open cash machine").

I've written code with 95+% code coverage. Of course, I do write unit tests sometimes, especially for boundary checks in calculations, but I have yet to meet serious regressions (serious: not wiped out within 24 hours) from skipping unit tests, even for refactorings.

Sometimes I don't write a single line of code for 3-4 days straight, just drawing. Then, in one day, I type in 1500-2000 lines. By the next day, they're mostly production-ready. Sometimes unit tests are written (with 80+% coverage), sometimes (in addition) testers are asked to try to break it, and every single time some people are asked to review it by looking at it.

I have yet to see those unit tests find anything.

I wish design-thinking would take the place of TDD... but TDD is so much easier; it's like going at it with a sledgehammer. Designing needs thinking, and you're away from the keyboard most of the time.

2
  • I think you can design at the keyboard. But planning and organizing code does require a great deal more thought than simply stumbling through input to output in classes that might as well be monolithic functions. Commented Jul 26, 2012 at 19:49
  • Trust me, monolithic functions don't fit an A4 (or US Letter) paper. Sometimes when I'm lazy I do "design" at the keyboard, but that's not the same quality. Whenever I do serious development, I do design with UML. When you're drawing these, you're trying to explain to yourself and to others in a much restricted but still structured way what's happening, and when you're able to explain the code from every single perspective, then, only then you type it in, and suddenly all your bugs are at most typing errors...
    – Aadaam
    Commented Jul 26, 2012 at 20:12
