
Cody Potter - Posted on March 26, 2024

Your Unit Tests Are Thirsty: Embrace DAMP Testing

Have you ever stumbled upon a set of unit tests that felt more like deciphering ancient hieroglyphs than reading code? What does this test do, and furthermore, what does the code it's testing do? Today, we'll delve into the curious case of tests that are so obsessed with the DRY principle (Don't Repeat Yourself) that they become harder to read and maintain than the code they cover.

The Dilemma of DRYness

The DRY principle is a fundamental tenet of software development, advocating for code reusability and avoiding redundancy. However, like any good thing, it can be taken too far. Some programmers wield the DRY principle like a sacred sword, slashing away at any hint of repetition without considering the readability and maintainability of their code.

If all you have is a hammer, everything looks like a nail. We get so used to writing DRY production code that we go overboard when writing our tests.

The Quest for Clarity

Tests should prioritize clarity, guiding developers through the functionality of the codebase. Tests should serve as documentation, so an obsession with DRYness in test code hurts that documentation. Instead of clearly delineating the setup, action, and assertion phases, overly DRY tests blur into an indistinguishable mess of abstractions and clever tricks.

An example

While this comes down to opinion, let's look at a set of tests that are overly abstract and hard to read. These tests are not too dissimilar from tests I've seen in various codebases.

package main

import "testing"

func makeAssertions(t *testing.T, name string, actual, expected interface{}) {
    if actual != expected {
        t.Errorf("%s: got %v, want %v", name, actual, expected)
    }
}

func runTest(t *testing.T, testName string, input []int, expected interface{}, calculatorFunc func([]int) int) {
    actual := calculatorFunc(input)
    makeAssertions(t, testName, actual, expected)
}

func runTests(t *testing.T, testCases []testCase, calculatorFunc func([]int) int) {
    for _, tc := range testCases {
        runTest(t, tc.desc, tc.input, tc.expected, calculatorFunc)
    }
}

type testCase struct {
    desc     string
    input    []int
    expected int
}

func newTestCase(desc string, input []int, expected int) testCase {
    return testCase{
        desc:     desc,
        input:    input,
        expected: expected,
    }
}

func TestCalculateSum(t *testing.T) {
    runTests(t, []testCase{
        newTestCase("case1", []int{1, 2, 3}, 6),
        newTestCase("case2", []int{0, 0, 0}, 0),
        newTestCase("case3", []int{-1, -2, -3}, -6),
    }, CalculateSum)
}

func TestCalculateProduct(t *testing.T) {
    runTests(t, []testCase{
        newTestCase("case1", []int{1, 2, 3}, 6),
        newTestCase("case2", []int{0, 0, 0}, 0),
        newTestCase("case3", []int{-1, -2, -3}, -6),
    }, CalculateProduct)
}

func TestCalculateAverage(t *testing.T) {
    runTests(t, []testCase{
        newTestCase("case1", []int{1, 2, 3}, 2),
        newTestCase("case2", []int{0, 0, 0}, 0),
        newTestCase("case3", []int{-1, -2, -3}, -2),
    }, CalculateAverage)
}

If you were trying to figure out how to use CalculateAverage, I'd guess the first thing you'd do is look for usage in the codebase. Maybe you'd grep for CalculateAverage and find this test case.

What does this test case tell you about CalculateAverage and how to use it? Basically nothing. Instead, you'd have to start traversing the code. Here is the order in which you'd have to ask questions to figure out what's going on:

  1. What is newTestCase doing?
  2. What is runTests doing?
  3. How do the arguments of each of these functions map to the output?
  4. What is runTest doing? Oh, it calls the passed-in func and makes an assertion on the result.
  5. What assertions are made? Let's read makeAssertions.

Then you go all the way back to the test case itself and check the usage, probably making one or two guesses about the order of arguments, etc.

When we learn to develop production applications, the DRY principle is hammered into our brains. Any SDET or QA engineer can tell you the problem here: this code is not DAMP (Descriptive And Meaningful Phrases).

Sometimes DRY vs DAMP is presented as a dichotomy, but these two principles work together in both production and test code. The difference is that we should value easy-to-read tests over non-repetitive tests.

Tests repeat themselves, a lot. And it's not a bad thing, because the purpose is to test the same thing again and again with slightly different input values or setup. This repetition should just be isolated to your test file or bundle. The reasons we keep production code DRY do not apply to test code. Tests don't need to be orthogonal.

Each test should have a clear setup, execution, and assertion. This ensures that anyone reading the test can understand what is being tested, how it's being tested, and what the expected outcome is. Clear setup establishes the initial conditions or state required for the test. Execution represents the action being tested, often a function call or operation. Assertion verifies that the outcome matches the expected result. By having these distinct phases, tests become more self-explanatory and maintainable.
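
As a minimal sketch of those three phases, here's what a single test might look like with each phase labeled, assuming it lives in the same test file as the examples in this post (so CalculateSum and the testing import are already in scope; the test name here is just for illustration):

func TestCalculateSumOfThree(t *testing.T) {
    // Setup: establish the input and the expected outcome.
    input := []int{1, 2, 3}
    expected := 6

    // Execution: call the function under test.
    actual := CalculateSum(input)

    // Assertion: verify the outcome matches the expectation.
    if actual != expected {
        t.Errorf("got %v, want %v", actual, expected)
    }
}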

Tests should read like prose, where each line tells a clear and cohesive story of the software's behavior. Our example above does the opposite, so let's fix it.

How do we fix it?

Let's rewrite our tests such that each test case is defined directly inside the test function.

package main

import "testing"

func TestCalculateSum(t *testing.T) {
    testCases := []struct {
        desc     string
        input    []int
        expected int
    }{
        {"case1", []int{1, 2, 3}, 6},
        {"case2", []int{0, 0, 0}, 0},
        {"case3", []int{-1, -2, -3}, -6},
    }

    for _, tc := range testCases {
        t.Run(tc.desc, func(t *testing.T) {
            actual := CalculateSum(tc.input)
            if actual != tc.expected {
                t.Errorf("%s: got %v, want %v", tc.desc, actual, tc.expected)
            }
        })
    }
}

func TestCalculateProduct(t *testing.T) {
    testCases := []struct {
        desc     string
        input    []int
        expected int
    }{
        {"case1", []int{1, 2, 3}, 6},
        {"case2", []int{0, 0, 0}, 0},
        {"case3", []int{-1, -2, -3}, -6},
    }

    for _, tc := range testCases {
        t.Run(tc.desc, func(t *testing.T) {
            actual := CalculateProduct(tc.input)
            if actual != tc.expected {
                t.Errorf("%s: got %v, want %v", tc.desc, actual, tc.expected)
            }
        })
    }
}

func TestCalculateAverage(t *testing.T) {
    testCases := []struct {
        desc     string
        input    []int
        expected int
    }{
        {"case1", []int{1, 2, 3}, 2},
        {"case2", []int{0, 0, 0}, 0},
        {"case3", []int{-1, -2, -3}, -2},
    }

    for _, tc := range testCases {
        t.Run(tc.desc, func(t *testing.T) {
            actual := CalculateAverage(tc.input)
            if actual != tc.expected {
                t.Errorf("%s: got %v, want %v", tc.desc, actual, tc.expected)
            }
        })
    }
}

The revised code provides several benefits:

Readability: Each test case is self-contained and easy to understand. The setup, execution, and assertion are all clearly visible within each test case. This makes it easier for other developers to understand what each test is doing.

Maintainability: Because the test cases are defined directly in the test functions, it's easier to add, remove, or modify test cases. There's no need to navigate to a separate helper function or data structure.

Simplicity: The revised code removes the need for helper functions and complex structures. This makes the code simpler and easier to understand.

Flexibility: Each test case can be run independently, which can be useful for debugging. If a test fails, you can run just that test to isolate the problem (see the example after this list).

Transparency: The assertion is made directly in the test case, making it clear what is being tested and what the expected outcome is. This transparency can help prevent bugs and make the tests more reliable.
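
As an example of that flexibility, Go's test runner lets you target a single subtest from the command line with the -run flag, which takes slash-separated patterns matching the test function name and then the subtest name. Something like this would run only the third CalculateAverage case:

go test -run TestCalculateAverage/case3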

Conclusion: Finding the Sweet Spot

Clever is not Clear.

As we conclude our journey through the maze of overly DRY tests, let's remember that the true measure of code quality isn't how cleverly we avoid repetition, but how effectively we communicate the intent of our code. So, the next time you find yourself tempted to abstract away every iota of redundancy, take a step back and ask yourself: "What does this test really do?"

In the end, clarity trumps cleverness. Let's strive for tests that are as refreshing as a summer breeze, not as dry as the Sahara desert.

I'm parched.