6

http://butunclebob.com/ArticleS.UncleBob.TheThreeRulesOfTdd

You are not allowed to write any production code unless it is to make a failing unit test pass.

So where do asserts fit into this? If you followed this rule you'd never write an assert without a unit test. But how would you test for the existence of an assert? The very nature of assert seems to make this idea impossible as the test would explode.

And what about code meant to handle absurd conditions that are difficult or impossible to replicate? Some sort of FUBAR from a system call for example... You need to handle the error but the condition that causes the error is pretty rare and comes from the OS. How do you write a unit test to make that fail?

Or say you're writing a template library and want to do some concept checks before going too deep into template vomit land? I don't know of any framework that supports compile time failure as a test PASS.

I've tried to live by some of these TDD edicts, but at some point I've always run into walls I can't get around. These are some of them.

I guess I should say that I'm talking about assert in the C/C++ sense in which a program, built without NDEBUG, will fault out if the assertion fails. Not some exception throwing mechanism that's called assert. Didn't even realize someone would use that name for throwing exceptions.

3
  • 1
    Interesting edit. To provide an alternate perspective, in Delphi "assert" works a lot like it does in C, except that if it fails, it raises an exception instead of killing the program. You aren't likely to have any exception handlers to catch that exception class, but the global exception handler will get it. If you have an error reporting library hooked in, it will report on that, (without needing a separate error reporting mechanism for assertion failures,) and it allows the program to continue executing; you can still use any other functionality, just not this thing that bugged out on you. Commented May 20, 2012 at 18:47
  • 1
    The software under test would explode. But if this is the expected result of the test itself, the test would pass.
    – mouviciel
    Commented May 22, 2012 at 7:47
  • 1
    In some popular programming languages, an assert that fails will just raise an exception. Languages that don't do this are harder to work with, for the sorts of reasons you describe in your question. Commented Jul 1, 2016 at 16:42

4 Answers

14

It looks to me like you're experiencing cognitive dissonance, trying to believe two contradictory ideas and accept both as valid. The way to resolve it is to understand that one (or possibly both) must be incorrect, and find out which it is. In this case, the problem is that those edicts are based on a false premise, which Uncle Bob repeats several times a few lines further down:

However, think about what would happen if you walked in a room full of people working this way. Pick any random person at any random time. A minute ago, all their code worked.

Let me repeat that: A minute ago all their code worked! And it doesn't matter who you pick, and it doesn't matter when you pick. A minute ago all their code worked!

That's the shining promise of TDD: test everything, make it so all your tests pass, and all your code will work.

Problem is, that's a blatant falsehood.

Test everything, make it so all your tests pass, and all your tests will pass, nothing more, nothing less. That doesn't mean anything particularly useful; it only means that none of the error conditions that you thought to test for exist in the codebase. (But if you thought to test for them, then you were paying enough attention to that possibility to write the code carefully enough to get it right in the first place, so that's less helpful than it might be.)

It doesn't mean that any error you didn't think of is not present in the codebase. It also doesn't mean that your tests--which are also code written by you--are bug-free. (Take that concept to its logical conclusion and you end up caught in infinite recursion. It's tests all the way down.)

To give an example, there's an open-source scripting library that I use whose author boasts of over 90% unit test coverage and 100% coverage in all core functionality. But the issue tracker is almost up to 300 bugs now and they keep coming. I think I found five from the first few days of using it in real-world tasks. (To his credit, the author got them fixed very quickly, and it's a good-quality library overall. But that doesn't change the fact that his "100%" unit tests didn't find these issues, which showed up almost immediately under actual usage.)

The other major problem is that as you go on,

every hour you are producing several tests. Every day dozens of tests. Every month hundreds of tests. Over the course of a year you will write thousands of tests.

...and then your requirements change. You have to implement a new feature, or change an existing one, and then 10% of your unit tests break, and you need to manually go over all of them to discern which ones are broken because you made a mistake, and which are broken because the tests themselves are no longer testing for correct behavior. And 10% of thousands of tests is a lot of unnecessary extra work. (Especially if you're doing it 1 test at a time, as the three edicts demand!)

When you think of it, this makes unit testing a lot like global variables, or several other bad design "patterns": it may seem helpful and save you some time and effort, but you don't notice the costs until your project becomes big enough that their overall effect is disastrous, and by that time it's too late.

It is now two decades since it was pointed out that program testing may convincingly demonstrate the presence of bugs, but can never demonstrate their absence. After quoting this well-publicized remark devoutly, the software engineer returns to the order of the day and continues to refine his testing strategies, just like the alchemist of yore, who continued to refine his chrysocosmic purifications.

-- Edsger W. Dijkstra. (Written in 1988, so it's now closer to 4.5 decades.)

7
  • 9
    I would up-vote this answer 1000 times if I could. The TDD dogma is just that: great for selling books, conference seats, and triple-digit consulting hours. What you need is more pragmatism and less dogma about things like this!
    – user7519
    Commented May 18, 2012 at 23:54
  • 16
    If 10% of your tests break from one change, you're doing it wrong. Either you have classes too tightly coupled, or large methods under test, or too many tests for one piece of functionality. Or something similar. SOMETHING is wrong. It should be, at most, a handful of tests and you should have identified those and broken them before making the change to fix them. What you're describing is test-last, not test-driven. And your link is to a podcast on which Joel completely butchers one of the SOLID principles before denouncing them all. Not a great source.
    – pdr
    Commented May 19, 2012 at 0:49
  • 1
    You are absolutely right that no unit testing (including producing a test for every little piece of code) can detect bugs nobody thought of testing for. I also tend to agree that there's some unjustified hype. But I'm wary of the latter part of your answer. I can't comment on long-running projects as I'm yet to work on one, but in my smallish (currently 8000 lines of Python, with heavy metaprogramming so it's like a few ten thousand lines of C/C++/Java) 6-month-old project, experience has been: changes break a couple of unit tests, but rarely unrelated ones, and more frequently detect errors.
    – user7043
    Commented May 19, 2012 at 9:09
  • 1
    Tests are only as good as the person writing them. But that's no different than any normal code. Changes to expected outcomes impact tests? Sure. How about changes to code that shouldn't impact the outcome? Case in point, I am on a team in the early stages of a large project. On at least 3 occasions, I have completely rewritten implementations of code, all for different reasons. The inputs to the code were unchanged. The expected outputs were unchanged. The internals changed completely. And it was the 100+ tests I had surrounding these implementations that gave me full confidence... Commented May 19, 2012 at 15:03
  • 13
    I think maybe you're misunderstanding how the TDD thing works. The first half of your post seems reasonable, but the later half, when you say, "...and then 10% of your tests fail...," because of changing requirements doesn't seem to mesh with what TDD is supposed to be. In TDD your tests wouldn't fail for unknown reasons, you wouldn't need to track down which unit tests fail and why...because the reason they failed is that you changed them to reflect the new requirement before writing the code that targets that requirement. Refactoring to open/closed doesn't seem expressed here either. Commented May 19, 2012 at 22:25
7

So where do asserts fit into this?

Test Your Code's Behavior

If you need to test that an assert fires, then do so. Pass in the value that makes the assert throw an exception, and in your test catch that exception to verify it was thrown.
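A minimal JUnit 4 sketch of that idea (the class and method names here are illustrative, not from the question, and Java's assert only fires when the JVM runs with -ea):

import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class AssertionFiresTest {

    // Hypothetical guard: documents the assumption that the divisor is non-zero.
    static int safeDivide(int numerator, int divisor) {
        assert divisor != 0 : "divisor must not be zero";
        return numerator / divisor;
    }

    @Test
    public void assertionFiresOnZeroDivisor() {
        boolean fired = false;
        try {
            safeDivide(10, 0);          // should trip the assert (JVM run with -ea)
        } catch (AssertionError expected) {
            fired = true;               // the assert firing is the expected result
        }
        assertTrue("expected the assertion to fire", fired);
    }
}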

And what about code meant to handle absurd conditions that are difficult or impossible to replicate?

You have a few choices.

Don't Test

You have the option of not testing. Testing provides a measure of risk mitigation. You can choose not to test, but realize you increase your risk.

Use Mocks to Facilitate Replication

You can almost always mock out some underlying call to throw the appropriate exception or error condition which can then be tested against.
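As a sketch of that (assuming JUnit 4 and Mockito are available; the interface and class names are hypothetical), a mock can force the rare OS-level failure on demand so the error-handling path actually gets exercised:

import static org.junit.Assert.assertFalse;
import static org.mockito.Mockito.any;
import static org.mockito.Mockito.anyString;
import static org.mockito.Mockito.doThrow;
import static org.mockito.Mockito.mock;

import java.io.IOException;

import org.junit.Test;

public class RareFailureTest {

    // Hypothetical wrapper around the system call that almost never fails.
    interface FileSystem {
        void write(String path, byte[] data) throws IOException;
    }

    // Hypothetical code under test: must degrade gracefully instead of crashing.
    static class DiskWriter {
        private final FileSystem fs;
        DiskWriter(FileSystem fs) { this.fs = fs; }

        boolean tryWrite(String path, byte[] data) {
            try {
                fs.write(path, data);
                return true;
            } catch (IOException e) {
                return false;   // the error-handling path we want to exercise
            }
        }
    }

    @Test
    public void survivesRareOsLevelFailure() throws IOException {
        FileSystem fs = mock(FileSystem.class);
        // Force the "absurd" condition that is nearly impossible to hit for real.
        doThrow(new IOException("simulated OS failure")).when(fs).write(anyString(), any());

        assertFalse(new DiskWriter(fs).tryWrite("log.txt", new byte[0]));
    }
}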

Indirect Testing

Usually you want to test a condition directly. Sometimes it's too difficult or expensive. In some cases you can mitigate some of the risk by testing for the conditions indirectly. This would include testing for initial states, side effects, or other indicators of the condition.
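A small illustrative sketch (hypothetical names, JUnit 4): rather than provoking the rare condition itself, call the handler that condition would trigger and check the side effect it leaves behind:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class IndirectTestingSketch {

    // Hypothetical service: the transport-level failure is hard to provoke,
    // but the handler it would invoke is ordinary code we can call directly.
    static class UploadService {
        private int failedUploads = 0;

        void onTransportFailure() {       // normally reached only via the rare error path
            failedUploads++;
        }

        int failedUploads() {
            return failedUploads;
        }
    }

    @Test
    public void failureHandlerRecordsTheIncident() {
        UploadService service = new UploadService();
        service.onTransportFailure();     // exercise the handler directly

        // Indirect evidence: the side effect the rare condition would leave behind.
        assertEquals(1, service.failedUploads());
    }
}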

Exceptions to the Rule

Some cases will always be very difficult to test directly. Testing for deadlocks, resource starvation, or threading issues can sometimes be too difficult.

4

Not sure why you say that "the test would explode". In this example I am testing an assertion:

public int methodUnderTest() {
    int i = resultOfACalculationThatShouldNeverBeZero();
    assert (i != 0);   // only fires when the JVM runs with assertions enabled (-ea)
    return 5 / i;
}

private int resultOfACalculationThatShouldNeverBeZero() {
    return 0;   // stand-in for the "impossible" value
}

@Test(expected = AssertionError.class)
public void shouldAssertBeforeDividingByZero() {
    methodUnderTest();   // the AssertionError raised by the assert is the expected result
}

In a real-world scenario, resultOfACalculationThatShouldNeverBeZero would have to be mocked somehow.
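One way to do that "mocking somehow", sketched below under the assumption that the calculation is made overridable (protected rather than private), is a test subclass that forces the impossible value; again this needs the JVM to run with -ea:

import org.junit.Test;

public class MethodUnderTestAssertionTest {

    static class Calculator {
        public int methodUnderTest() {
            int i = resultOfACalculationThatShouldNeverBeZero();
            assert (i != 0);
            return 5 / i;
        }

        protected int resultOfACalculationThatShouldNeverBeZero() {
            return 42;   // the real implementation would compute something non-zero
        }
    }

    @Test(expected = AssertionError.class)
    public void shouldAssertBeforeDividingByZero() {
        Calculator broken = new Calculator() {
            @Override
            protected int resultOfACalculationThatShouldNeverBeZero() {
                return 0;   // force the "impossible" value
            }
        };
        broken.methodUnderTest();
    }
}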

Having said that, I don't think assertions (at least in Java) are too useful. It's more like a crutch when debugging, to validate your own assumptions, but probably indicates an overly complex method. One point of the assertion is to document the intent. A unit test does that as well.

Having said all this, I would take what Uncle Bob says with a big grain of salt. I have read Clean Code and Agile Software Development, and those books contain lots of very, very bad code (breaking almost every one of his own rules) and examples where he actually makes the "fixed" code worse than the original.

4
  • 4
    +1 for the last paragraph if nothing else!
    – user7519
    Commented May 19, 2012 at 4:13
  • 2
    An assertion is a way of documenting an assumption at a particular point in the code. Nothing more, nothing less. Commented May 20, 2012 at 18:28
  • 1
    That is correct and my point is that unit tests also document assumptions. :) Commented May 21, 2012 at 21:52
  • 1
    @JarrodRoberson The problem with Uncle Bob is that he talks too much about code without really thinking about it from a regular-case perspective. More often than not, I end up reading something from him and thinking "Well, I can't work like that". He has some nice advice, but he also has some really nasty trash. A programmer is better off reading a really well spec'ed book, like C# in Depth by Jon Skeet.
    – T. Sar
    Commented Jul 4, 2016 at 13:51
1

I use tests and asserts for two different purposes.

I use tests primarily to communicate the desired behavior of my program, and secondarily to validate said behavior. I use asserts primarily to communicate assumptions with potential pitfalls, and secondarily to validate said assumptions.
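As a tiny illustration of that split (hypothetical names, JUnit 4): the assert states the assumption callers are expected to honour, while the test states the behaviour callers can rely on:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class IntentSketch {

    // The assert communicates the assumption: callers never pass an empty whole.
    static int percentOf(int part, int whole) {
        assert whole != 0 : "callers are expected to pass a non-empty whole";
        return 100 * part / whole;
    }

    // The test communicates the desired behaviour.
    @Test
    public void halfIsFiftyPercent() {
        assertEquals(50, percentOf(1, 2));
    }
}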

Neither are silver bullets, and in my experience both function better when viewed as a tool for communicating intent to other programmers, as I feel this is what they're good at. As others have said, don't get caught up in the dogma. Figure out what value the tools offer you and your team, and add them to your process as necessary.
