When Should I Write A Test?

Sam Newman asks three questions that can help you decide whether to write tests for a feature.

  1. How easy is the feature to test?
  2. What is the likelihood of the feature breaking?
  3. What is the impact of the break?

Let’s start with the second question; it’s the most important. If your code contains no logic, as with simple wrapper properties, there’s not much point in putting a test around it. Likewise, if all code A does is call another piece of code B that is already surrounded by tests, the likelihood of code A breaking is low, and you might decide you don’t need a unit test for it (an integration test that covers both is another story). In all other cases it makes sense to write a test.
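To make the distinction concrete, here’s a minimal C# sketch (the Order class and its members are invented for illustration): a pure wrapper property with nothing worth testing, next to a method whose logic does deserve a test.

    using System;

    public class Order
    {
        private decimal _total;

        // A pure wrapper property: no logic, so a test adds little value.
        public decimal Total
        {
            get { return _total; }
            set { _total = value; }
        }

        // Logic lives here: validation and arithmetic. This is worth testing.
        public decimal ApplyDiscount(decimal percent)
        {
            if (percent < 0 || percent > 100)
                throw new ArgumentOutOfRangeException("percent");
            return _total - (_total * percent / 100);
        }
    }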

A feature is sometimes hard to test because it requires a whole setup of servers and databases. However, with Isolator you can, and should, write tests for the different components in isolation. If you’ve decided a feature needs a test (per the previous paragraph), Isolator makes it easy to unit test, so write the test. For integration tests it’s just as important: if a system is too complex to test, what do you think the probability is of bugs hiding there? Exactly. Q1 is irrelevant.
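Here’s a rough sketch of what isolating a database-bound dependency looks like with Isolator’s arrange-act-assert API. The ReportGenerator and CustomerDatabase classes are invented for illustration, and the exact namespace and attribute names may vary between Isolator versions, so treat this as an outline rather than a recipe.

    using System;
    using NUnit.Framework;
    using TypeMock.ArrangeActAssert;

    // Hypothetical production classes: imagine CountCustomers hits a real database.
    public class CustomerDatabase
    {
        public virtual int CountCustomers()
        {
            throw new NotImplementedException("talks to a real database");
        }
    }

    public class ReportGenerator
    {
        private readonly CustomerDatabase _db;
        public ReportGenerator(CustomerDatabase db) { _db = db; }
        public string GenerateHeader() { return "Customers: " + _db.CountCustomers(); }
    }

    [TestFixture]
    public class ReportGeneratorTests
    {
        [Test, Isolated]
        public void GenerateHeader_BuildsCount_WithoutARealDatabase()
        {
            // Fake the dependency so no server or database setup is needed.
            var fakeDb = Isolate.Fake.Instance<CustomerDatabase>();
            Isolate.WhenCalled(() => fakeDb.CountCustomers()).WillReturn(42);

            var generator = new ReportGenerator(fakeDb);

            Assert.AreEqual("Customers: 42", generator.GenerateHeader());
        }
    }

The fake never touches a real database, so the test stays fast and needs none of the server setup that makes such features look untestable.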

Q3 starts off with good intent. It analyzes the severity of a possible bug and, based on that, directs you either to write the test or to skip it. However, it does not take into account that a future code change might affect the current behavior of the feature. We write tests not just to make sure our code works now, but also as a safety net that accompanies the code through its lifetime. If we don’t have a test in place when we change our code, we won’t know we’ve broken something.

To counter that, let’s look at a different example. Say I have a web page with a yellow background. Now, I’m sure it never happened to you, but with every visit from a marketing person, the color changes. Should I have a test in place that checks that the background color is the correct one, and then maintain that test with every change? Obviously not. And this is where Q3 comes in: if the impact of a break is just a wrong color identifier, I will not write a test.

So when do I really write a test? Where there is logic. It is logic that causes state to change, and it’s logic that interacts with other components in your system. Where there’s logic, there’s a chance for a bug (Q2), and bugs in logic carry the biggest cost (Q3). That’s why there’s no need to test properties that have no logic. And whenever you do find a bug, write a test for it as well.
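That last point is the classic regression test. As a small sketch (the Pager class and its off-by-one bug are invented for illustration): once the bug is found and fixed, a test pins the fix down so a future change can’t quietly reintroduce it.

    using NUnit.Framework;

    public static class Pager
    {
        public static int PageCount(int items, int pageSize)
        {
            // The (invented) bug: plain items / pageSize truncated the
            // last partial page. Rounding up fixes it.
            return (items + pageSize - 1) / pageSize;
        }
    }

    [TestFixture]
    public class PagerTests
    {
        [Test]
        public void PageCount_RoundsUp_WhenItemsDontDivideEvenly()
        {
            Assert.AreEqual(3, Pager.PageCount(items: 21, pageSize: 10));
        }
    }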

Integration tests are not that different, but they do cost more to write, because the number you can write grows with the system’s complexity. So what should you do? We write tests to lower the maintenance cost of our code. If you have a very good suite of unit tests, you can minimize the number of integration tests around the well-tested code. This is an analysis-driven decision, unlike the unit-test question.

What are your guidelines for writing tests?

  • Paulo Morgado

    "That’s why there’s no need to test properties, if they have no logic."

    This "does not take into account that a future code change might affect the current behavior of the feature".

    Isolator is also very good at testing that, for example, a property is just a wrapper around a field and nothing more happens. There are no unknown side effects.

    If, in the future, the property changes to have side effects, the test will fail. You might then want to check all the code using the property, or just change the test.

  • Gil Zilberfeld

    Paulo,

    My thinking is that testing that a property behaves as a field is a bit of an over-specification, since you don't really care about the internal implementation. Your tests verify behavior, according to the interface of the object.

    If it just wraps a field, there's no behavior. If there is logic in there, you need the tests. You need to add the tests when they make sense, and beware of YAGNI.

  • ulu

    I usually test stuff whenever I can figure out a requirement for it, kinda BDD stuff.

    For example, suppose I've got a search form and a Presenter for it. The View raises events, and the Presenter acts on the View by calling its methods. So, should I test that clicking the Search button raises the corresponding event? Yes, because there's a requirement: "clicking the search button initiates the search process". Similarly, I should test that calling the ShowData method displays the data in the SearchDataGrid. The requirement is "the search results are displayed in a grid".

    Clearly there's not much logic in these methods; they're more of the "routing" kind. However, they are not that trivial. For example, the event-raising method adds semantic value to the event. Before that, it's just a button click; the method transforms it into something meaningful: initiating the search. The other method turns some "immaterial" data into "material" rows of strings.

    On the other hand, sometimes (rarely) refactoring leaves me with a private method that contains some logic but is clearly an implementation detail. I don't want to write a test around it, since I've already covered this behavior by writing a test for the method that calls it. In addition, I keep the freedom to change this method when I need to, without breaking my tests.
