Anti-patterns in test automation

End-to-end tests are known to be flaky. Addressing anti-patterns appearing in such tests can make your tests more reliable.

As discussed by Mike Wacker from Google, end-to-end tests are known to be flaky, take a long time to run, and, when they fail, make it hard to isolate the root cause of the failure. Part of those problems stems from anti-patterns appearing in such tests, and addressing those anti-patterns may make your tests more reliable, more useful in isolating root causes and cheaper to maintain.

The following list of 8 anti-patterns comes from my test automation experience. Some I found in legacy test suites my teams and I inherited. Others were committed by candidates I interviewed for testing positions. A few come from fellow developers who helped us with test automation.

Hardcoded test data (#1)

This usually happens when you start small and think small, without a longer-term perspective in mind. Let’s imagine you’re testing authentication in your system with a sample user:

String testUser = "mgawinecki@tokyo.jp";

When coming back to the test code a month later, you might ask yourself why you wanted to test with this particular user. Is it because you wanted to test for a user that is inactive? Or maybe for a Japanese-speaking user? Hard to guess. And when the test suite grows to several hundred test cases, maintaining hardcoded test data becomes a nightmare. How do you handle it? The same way you handle the magic number anti-pattern: give the data an intention-revealing name.
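By analogy to magic numbers, a minimal sketch of such an intention-revealing name (the constant name is my own illustration):

// The constant name documents WHY this particular user was chosen,
// so the intent is still obvious when you return a month later.
static final String INACTIVE_USER = "mgawinecki@tokyo.jp";

For richer data, a factory or builder method with an equally descriptive name, say a hypothetical someInactiveUser(), expresses the same intent while decoupling the test from one concrete record.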

Hardcoded environment configuration (#2)

Imagine the same checks must be run against both Firefox and Chrome, or against a local and then a pre-production test environment. There is no way to do it if you have hardcoded references to the browser type, server host or databases. A solution is to make your tests environment-agnostic and provide the configuration to the test at runtime, e.g., by reading it from a configuration file. As a bonus, updating the configuration will no longer require modifying multiple files.
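A minimal sketch of such runtime configuration, assuming a test.properties file on the classpath (the file name and the keys are my own):

import java.io.InputStream;
import java.util.Properties;

// Reads environment details at runtime instead of hardcoding them.
class TestConfig {

    private final Properties props = new Properties();

    TestConfig() throws Exception {
        // Assumed file contents, e.g.:
        //   browser=chrome
        //   baseUrl=https://preprod.example.com
        try (InputStream in = getClass().getResourceAsStream("/test.properties")) {
            props.load(in);
        }
    }

    String browser() { return props.getProperty("browser", "firefox"); }
    String baseUrl() { return props.getProperty("baseUrl"); }
}

The same suite can then run against Chrome on pre-production simply by shipping a different file, without touching a single test.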

Taking environment state for granted (#3)

Taking environment state for granted is often over-optimistic. Unlike in unit tests, an end-to-end setup gives little control over the test environment state, particularly when the environment is shared with other teams. When the test starts failing, it might be because a new bug was introduced, or because the environment is not in the state your test needs. For instance, the user you use for tests has been locked out by another team, or the flight schedule has changed and you can no longer use a connection from London to Los Angeles in your tests. There are a number of ways to handle such issues:

  • setting the system environment to a certain state, e.g., creating a test user,
  • finding test data matching a specification in the existing environment, e.g., a roundtrip flight,
  • checking the environment is in the required state and skipping the test if it is not.

Each of those solutions can be applied manually before each test run, but with a large number of tests and a dynamic environment this simply does not scale. An alternative is to automate one of them. The last one is usually the easiest to implement and saves execution time. It does not guarantee test data for your test, but it will skip (not fail!) the test immediately when it is clear it will provide no useful feedback. JUnit’s assumeThat construction can be intuitive here:

assumeThat(testUser, existsInSystem());
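In context, this could look as follows in a JUnit 4 test; a sketch, where existsInSystem() is a hypothetical custom matcher (anti-pattern #5 below shows how to write one):

import static org.junit.Assume.assumeThat;

import org.junit.Test;

public class AuthenticationTest {

    @Test
    public void activeUserCanLogIn() {
        String testUser = "mgawinecki@tokyo.jp";
        // When the assumption does not hold, JUnit reports the test as
        // skipped rather than failed, so the build is not misleadingly red.
        assumeThat(testUser, existsInSystem()); // hypothetical matcher
        // ... the actual login steps follow only if the assumption held
    }
}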

Conditional assertions (#4)

Some people are aware that the environment state may change, so they try to make their test verify different things depending on that state:

if (existsInSystem(testUser)) {
    // test for existing user
    ...
} else {
    // test for not existing user
    ...
}

However, this is a shortsighted workaround, as it makes your test non-deterministic: you will never be sure which path will be verified in the next run. In the extreme case, if the environment is always in the same state, only one execution path will ever be tested. In general, there’s no reason to have one test method if you’re testing two different outcomes.
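A cleaner fix is to split the conditional into two deterministic tests, each guarded by its own assumption; a sketch, reusing the hypothetical existsInSystem matcher:

import static org.hamcrest.CoreMatchers.not;
import static org.junit.Assume.assumeThat;

@Test
public void existingUserCanLogIn() {
    // skipped (not branched) when the precondition does not hold
    assumeThat(testUser, existsInSystem());
    // test for existing user
}

@Test
public void notExistingUserIsRejected() {
    assumeThat(testUser, not(existsInSystem()));
    // test for not existing user
}

Each test now verifies exactly one outcome, and a skipped test tells you the environment was not in the required state.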

Conspiracy of silence (#5)

When assertions in tests are failing with almost no clue why

Expected: true
Actual: false

it is hard to isolate the root cause of the failure. This happens when using simple assertions like assertTrue or assertEquals. A better solution is to use custom matchers in combination with custom messages:

assertThat("Account with debit is missing", accounts, contains(expectedAccountWithDebit));

No traces left (#6)

Once a test fails, you will need to understand what happened before the failure. However, if you don’t want to know that, follow this anti-pattern:

  • Leave no traces of what your test has done.
  • Don’t report intermediate states of the system in the middle of your test execution.
  • No screenshots, no photos, no paparazzi, particularly when you use headless Web browsers like PhantomJS. It must remain headless, right!
  • No HTTP traffic recorded, no HARs, no curls to reproduce traffic, etc.

Jokes aside, the goal of addressing this issue is to make reproducing the problem you found as cheap as possible. Running the same test and debugging the test and the system under test again and again is usually expensive, and can be ineffective for intermittent bugs.
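For example, with JUnit 4 and Selenium, a TestWatcher rule can leave a screenshot behind on every failure; a sketch, assuming the driver field is initialized elsewhere in the test class:

import java.io.File;

import org.junit.Rule;
import org.junit.rules.TestWatcher;
import org.junit.runner.Description;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;

public class CheckoutTest {

    private WebDriver driver; // assumed to be set up in a @Before method

    @Rule
    public TestWatcher screenshotOnFailure = new TestWatcher() {
        @Override
        protected void failed(Throwable e, Description description) {
            // Leave a trace: dump what the browser saw at the moment of failure.
            File screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
            System.out.println("Screenshot saved at: " + screenshot.getAbsolutePath());
        }
    };
}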

Tests cluttered with business logic (#7)

Tests that mix details of the system’s business logic with the steps of the test scenario are hard to read and maintain. A solution is to separate what the test is testing from how it is doing it. In software development this separation of concerns is achieved through encapsulation. I have found a number of ways to apply it in test automation:

  • the Page Object pattern, which hides Web page details from the tester,
  • domain-specific assertions and matchers, which hide the technical details of a check, e.g., a UserExistsMatcher,
  • a Domain Specific Language (DSL), which describes test steps in the language of an end-user.

I found a good introduction to the latter two approaches in the article Writing Clean Tests – Replace Assertions with a Domain-Specific Language.
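For the first approach, a minimal Page Object sketch (the URL and the element locators are my own assumptions):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Hides the page structure (locators, navigation) behind a domain-level API.
class LoginPage {

    private final WebDriver driver;

    LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    LoginPage open() {
        driver.get("https://example.com/login"); // hypothetical URL
        return this;
    }

    void loginAs(String email, String password) {
        driver.findElement(By.id("email")).sendKeys(email);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login-button")).click();
    }
}

A test then reads new LoginPage(driver).open().loginAs(user, password), and when the page markup changes, only the page object needs updating.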

Sleeping for an arbitrary amount of time (#8)

Waiting in your test for 4 seconds

Thread.sleep(4*1000);

because your production system usually takes 4 seconds to go out over the network, fetch some data, and come back with the result. This is baaad, because your test becomes fragile to network congestion: it will start failing when network latency increases. What you actually intended is to wait until “a response is returned” or “an object appears”, using explicit and implicit waits (in Selenium) or active polling in general (e.g., with the Awaitility library).
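A sketch of both options; the locator, the testUser and the existsInSystem check are my own assumptions (and note that Selenium 4 changed the WebDriverWait constructor to take a Duration):

import static java.util.concurrent.TimeUnit.SECONDS;
import static org.awaitility.Awaitility.await;

import org.openqa.selenium.By;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// Selenium explicit wait: returns as soon as the element is visible,
// and fails only after the full 10-second timeout elapses.
new WebDriverWait(driver, 10)
        .until(ExpectedConditions.visibilityOfElementLocated(By.id("result")));

// Active polling with Awaitility, for conditions beyond the UI:
await().atMost(10, SECONDS).until(() -> existsInSystem(testUser));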

Obviously, there are exceptions where explicit sleeping is fine.

Take-away message

The presented anti-patterns demonstrate that writing system tests is a slightly different beast than writing unit tests. Sure, some anti-patterns, like the Wet Floor, can happen both here and there. However, in this post I have focused on anti-patterns specific to end-to-end tests. I have no funny names for them as yet, so if you come up with any, let me know.

I’m waiting for your feedback! Do you agree or disagree with some of the anti-patterns? Or maybe you have encountered some others?

4 Comments for “Anti-patterns in test automation”

Andy Gol says:

Hi, another point which you can add to the “bad behavior” list is repetition. Very often many tests repeat themselves, especially data-driven tests. In those cases the repeated lines of the tests can be replaced with one common keyword (method) where only the arguments vary.

In general, a nice list and the beginning of something bigger.

Maciej Gawinecki says:

Hi Andy, thank you for your feedback. I agree: a data-driven approach can solve code repetition. I will definitely include your anti-pattern when I publish “Anti-patterns: Part 2” 🙂

There are also other ways to avoid repetition. Fortunately, test automation is just an instance of programming, and there are well-known practices, honed by decades of programming experience, for writing clear code. However, I also know that testers, me included, often write ad-hoc hodge-podge libraries to remove code repetition. Writing libraries, helper classes, etc. requires good design. There’s a nice chapter about this in “Lessons Learned in Software Testing”, titled “Don’t build test libraries simply to avoid repeating code”.

Felipe Carvalho says:

Nice post, dude! One anti-pattern I sometimes find is “write it as an integration test because it’s easier”. I’ve met some people along the way who found unit tests boring because they had to write a lot of mocks and had the feeling they were writing test code that barely tested anything, and so they fancied writing integration tests instead, because they would spend less time setting up pre-conditions for the piece of business code they wanted to push forward (e.g., instead of mocking a collection of objects to be returned from the DAO, they’d just write a SQL script that would put everything in the DB before the tests started). What these guys failed to perceive is that, over time, the DB will change, making it a PITA to refactor those SQL scripts that exist solely for testing purposes. Besides, once people get used to writing integration tests just because it’s “easier”, they keep doing it even for simple checks that don’t actually need the DB to be up, or the application context to be initialized, leading to a really slow test suite, which, in turn, becomes less reliable with time, because it takes so long to run that people just give up on running it frequently.

Looking forward to part 2 of this post! 🙂

Maciej Gawinecki says:

Thanks for your feedback and for mentioning this anti-pattern. You’re right, people think that having an environment where they can automate everything will remove from them the burden of maintaining mocks, while actually they have to maintain the environment (or worse, rely on other teams to do that). I think this anti-pattern is called the ice-cream cone, or the inverted/distorted test pyramid. Are testers in your team mocking a lot?
