Why do we write automated tests?

  • Specify the correct behaviour of the system
  • Improve code quality
  • Reduce reported defects
  • Make checking code faster
  • Tell us when we’ve broken something
  • Tell us when our work is done
  • Allow others to check our code
  • Encourage modular design
  • Keep behaviour constant during refactoring

What do we test?

  • The law of diminishing returns applies here
  • Testing everything is infeasible. Don’t be unrealistic.
  • 70% code coverage is actually pretty decent for most codebases.
  • First, test the common stuff.
  • Then test the critical stuff.
  • Next, test the common exception-case stuff.
  • Add other tests as appropriate.

When do we write an automated test?

  • First :)
  • Use a unit test to provide a framework for writing your code.
  • If you find yourself running up an entire application more than once or twice to test a behaviour, wrap that behaviour in a unit test and invoke it directly.
  • When you receive a bug report, write a test to reproduce the bug.
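As a sketch of the bug-report habit above: suppose a hypothetical `slug()` helper was reported to crash on empty input. The test below reproduces the report and pins the fix (all names here are invented for illustration):

```python
import unittest

def slug(title):
    # Hypothetical function under test; the reported bug was a crash on
    # empty input, fixed here by returning "" for a falsy title.
    return title.strip().lower().replace(" ", "-") if title else ""

class SlugBugReproTest(unittest.TestCase):
    def test_empty_title_does_not_crash(self):
        # Reproduces the bug report: slug("") used to raise.
        self.assertEqual(slug(""), "")

    def test_normal_title_is_slugified(self):
        self.assertEqual(slug("Hello World"), "hello-world")

# Run with: python -m unittest <module-name>
```

Once the fix is in, the same test stays behind as a regression guard.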

Some principles for automated tests

  • Test code is first-class code.

    • Tests should be small and simple but treated as just as important as the code that actually performs the task at hand.
  • Every test must be able to run in isolation

    • Tests should set the environment up for themselves and clean up afterwards
    • Use your ClassInitialize, ClassCleanup, TestInitialize and TestCleanup attributes if you’re in MSTest-land, and the equivalents for NUnit, XUnit etc.
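In Python’s unittest, for comparison, the equivalents of those hooks are setUpClass/tearDownClass and setUp/tearDown. A minimal sketch (the test content is invented for illustration):

```python
import shutil
import tempfile
import unittest

class IsolatedTest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Runs once per class (MSTest: ClassInitialize).
        cls.workdir = tempfile.mkdtemp()

    @classmethod
    def tearDownClass(cls):
        # Runs once per class (MSTest: ClassCleanup).
        shutil.rmtree(cls.workdir)

    def setUp(self):
        # Runs before each test (MSTest: TestInitialize).
        self.items = []

    def tearDown(self):
        # Runs after each test (MSTest: TestCleanup).
        self.items.clear()

    def test_append(self):
        self.items.append(1)
        self.assertEqual(self.items, [1])
```

Because every test gets a fresh fixture, no test can see another test’s leftovers.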
  • Tests should never rely on being executed in any particular order (that’s part of the meaning of “unit”)
  • Tests should not rely overmuch on their environment

    • Don’t depend on files being in any particular location on the filesystem. Use dynamically derived temporary paths if necessary.
    • Don’t hard-code paths.
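A small illustration of the temporary-path rule, using a hypothetical `count_lines()` function: the test creates everything it needs under a throwaway directory instead of assuming a hard-coded path exists.

```python
import os
import tempfile
import unittest

def count_lines(path):
    # Hypothetical function under test.
    with open(path) as f:
        return sum(1 for _ in f)

class CountLinesTest(unittest.TestCase):
    def test_counts_lines_in_temp_file(self):
        # Derive a throwaway path; the directory is removed automatically.
        with tempfile.TemporaryDirectory() as d:
            path = os.path.join(d, "sample.txt")
            with open(path, "w") as f:
                f.write("one\ntwo\nthree\n")
            self.assertEqual(count_lines(path), 3)
```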
  • If a class depends on a chain of other classes that you can’t easily instantiate in your unit test, that suggests your classes need refactoring.

    • Writing tests should be easy. If your classes make it hard, fix your classes first.
    • If fixing your class structures is difficult, consider writing a pinning test to enable refactoring, then fixing your class coupling, then removing that pinning test.
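One common refactoring for this kind of coupling is to inject the troublesome dependency rather than construct it internally. A sketch with invented names, where a plain callable stands in for a hard-to-build database layer:

```python
import unittest

class ReportGenerator:
    # Hypothetical class: it depends on an abstraction it is handed,
    # not on a concrete Database that drags in a connection pool.
    def __init__(self, fetch_totals):
        self._fetch_totals = fetch_totals

    def summary(self):
        totals = self._fetch_totals()
        return "total: %d" % sum(totals)

class ReportGeneratorTest(unittest.TestCase):
    def test_summary_sums_totals(self):
        # A lambda replaces the whole database stack in the test.
        gen = ReportGenerator(lambda: [10, 20, 5])
        self.assertEqual(gen.summary(), "total: 35")
```

The production code hands in the real data source; the test hands in a stub, and nothing deep in the object graph needs instantiating.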
  • Tests should be cheap to write.

    • If writing a test is difficult, this suggests that the purpose of the code is unclear. Attempt to clarify the code’s purpose first.
    • Don’t worry about exception-handling.

      • If an unexpected exception is thrown, the test fails. Don’t bother catching it and manually asserting failure.
    • Don’t allow for variations in your output unless you absolutely have to.
    • If there are going to be different outputs, ideally there should be different tests.
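A sketch of both points, with a hypothetical `parse_port()` function: the happy-path test does no exception handling at all (an unexpected exception simply fails it), and the expected failure case gets its own separate test.

```python
import unittest

def parse_port(text):
    # Hypothetical function under test.
    port = int(text)
    if not 0 < port < 65536:
        raise ValueError("port out of range")
    return port

class ParsePortTest(unittest.TestCase):
    def test_valid_port(self):
        # No try/except: any unexpected exception fails this test for us.
        self.assertEqual(parse_port("8080"), 8080)

    def test_out_of_range_port_raises(self):
        # An *expected* exception is a different output, so a different test.
        with self.assertRaises(ValueError):
            parse_port("70000")
```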
  • Tests should be numerous and cheap to maintain.

    • Each test should test one behaviour.
    • It’s much better to have many small tests that each check one piece of functionality than a few complex tests that check many things.
    • When a test breaks, we want to know exactly where the problem is, not just that there’s a problem somewhere in a call stack seven classes deep.
  • Tests should be disposable

    • When the code it tests is gone, the test should be dropped on the floor.
    • If it’s a simple, obvious test, it will be simple and obvious to identify when this should happen.
  • Tests need not be efficient.

    • Efficiency helps but correctness is key.
    • Optimise only when necessary.