Unit Testing in a Managed Environment – Code Review

It’s been a while since my last post on the subject. Last time I talked about having a continuous build system running the tests automatically. I’ll get back to what tools to put on top of that, but this time I want to talk about what’s necessary from the team’s perspective in order to successfully implement unit testing.

Teams vary in their composition. You won't get a team composed entirely of top performers most of the time (except for the Typemock crew, of course). Depending on the size of the team, you'll have a few technical leaders and a number of average developers. If you have below-average developers, get rid of them; they will just slow you down.

I’ll assume that management wants the process to work. That's not always the case, but that's for another post. Here, no one blocks the team, and they are given enough time to learn, adjust, and refactor their code. If they are lucky, they get extra tools to help them, like ReSharper.

The first practice that needs to be in place is code review, and if you can manage it, employ its bigger and better sibling, pair programming. The best way for people to learn is to discuss what their code does. And when you pair an experienced developer with a less experienced one, the review doubles as guidance for the latter.

Reviewing tests is an excellent way to make sure the code is tested correctly. Now, if you review the tests, that means you have tests to review, which means you're implementing the process. If there are no tests, stop and write them. Experienced developers can tell if the tests are testing too much, if they are too long, and if they make sense at all. And through the code review process, that experience is passed on to the less experienced.

Now let’s talk conventions. Is the left brace above the right brace? It is? Really? Well, you’ve just lost 5 seconds of precious development time checking that. The team should agree on the conventions that work for them and stick to them. But more important, agree on what’s important and what helps the development effort. Like, um… READABILITY.

Tests should be readable. Let’s start with naming: what do we test? What’s the scenario, and what’s the expected result? The test name should say all that. When it does, you can look at the test code and verify that it actually does it. If not, maybe it’s a sign you need another test. Or maybe you should rewrite the test completely.
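To make the naming idea concrete, here is a minimal sketch. The post is about .NET, but the convention translates; this example uses Python's `unittest`, and the `Calculator` class and its tests are invented for illustration. Each test name answers the three questions: what is tested, under what scenario, and with what expected result.

```python
import unittest

class Calculator:
    """Tiny hypothetical class under test."""
    def divide(self, a, b):
        if b == 0:
            raise ZeroDivisionError("cannot divide by zero")
        return a / b

class CalculatorTests(unittest.TestCase):
    # Name pattern: test_<what>_<scenario>_<expected result>.
    # Reading the name alone tells you what the test claims.

    def test_divide_by_zero_raises_error(self):
        with self.assertRaises(ZeroDivisionError):
            Calculator().divide(10, 0)

    def test_divide_two_positives_returns_quotient(self):
        self.assertEqual(Calculator().divide(10, 4), 2.5)
```

With names like these, a reviewer can check the body against the name: if the body does more than the name promises, that's the "need another test" signal from above.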

How long is the test in lines of code? Three screens long? Too long. Scrolling up and down does not help readability. Are you testing too much? Separate the tests. Are you setting up too much? Time to learn about Isolator. Is it really an integration test? Move it to the integration test suite.
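One common cure for an overly long test is splitting it and extracting the shared setup into a helper. A sketch of that refactoring, with a hypothetical `ShoppingCart` class (in Python here, though the same shape applies in any language):

```python
class ShoppingCart:
    """Hypothetical class under test."""
    def __init__(self):
        self.items = []
    def add(self, name, price):
        self.items.append((name, price))
    def total(self):
        return sum(price for _, price in self.items)

def make_cart_with_two_items():
    # Shared setup extracted into a helper, so each test
    # stays a few readable lines instead of three screens.
    cart = ShoppingCart()
    cart.add("book", 20)
    cart.add("pen", 5)
    return cart

def test_total_sums_item_prices():
    assert make_cart_with_two_items().total() == 25

def test_add_appends_item():
    cart = make_cart_with_two_items()
    cart.add("mug", 10)
    assert len(cart.items) == 3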

Is your test brittle? Does it know too much about the production code it’s testing? Time to rethink. Are you drawing the isolation line in the correct place? Does the test mimic the production code, line by line? That’s a no-no. You’re not testing what you think you’re testing. Isolate.
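To illustrate where the isolation line goes, here is a sketch: instead of asserting every internal step the production code takes (which breaks on any refactoring), stub the collaborator and assert the observable outcome. `PriceService` and `checkout` are hypothetical names, and the example uses a hand-rolled stub rather than a mocking framework like Isolator.

```python
class PriceService:
    """Collaborator that, in production, would hit a real database."""
    def price_of(self, item):
        raise NotImplementedError("talks to a real database in production")

def checkout(items, price_service):
    """Production code under test: sums the price of each item."""
    return sum(price_service.price_of(item) for item in items)

class StubPriceService(PriceService):
    # The isolation line: the real lookup is irrelevant to this test,
    # so the stub returns a fixed price.
    def price_of(self, item):
        return 10

def test_checkout_sums_prices():
    # Asserts the result, not the sequence of internal calls, so
    # refactoring checkout's internals won't break the test.
    assert checkout(["a", "b", "c"], StubPriceService()) == 30
```

The brittle alternative would verify that `checkout` called `price_of` exactly three times, in order, with exact arguments, effectively restating the implementation line by line.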

Code review is the second most effective practice I know for implementing unit testing. The best is pair programming. It instills the conventions in the production code as you go, which is more effective than intermittent code review. So go for it if you can. We do.