Replacing test frameworks with test libraries

I learned that the hard way. In this post I share my experiences, together with suggestions on where and how to start.

Test frameworks provide scaffolding for building automated tests: domain-specific vocabulary to describe your business scenarios, loggers to generate test results in a standardized format, or “glue” to talk to various services. Such ready-made elements of a framework initially speed up scripting automated tests.

One test framework may work fine for multiple teams testing similar products and working in a similar way. However, in big organizations different teams develop and test significantly different things and work in their own ways; the development culture at my current company is no different. In such cases, teams feel tempted to add more and more functionality to an already bloated framework, which results in an anti-pattern called Frankenstein’s Framework or Wunder Framework. Such frameworks exhibit needless complexity and immobility – they become hard to use and maintain.

I have learned that the hard way. However, I did not want to “throw the baby out with the bathwater”. I found routines common to different testing teams – yet each team was still automating the same routines from scratch. For instance:

  • In many teams, test scripts start by authenticating to our flagship product – the authentication routine automated by one team can be shared via an authentication test library and reused by others.
  • Applications use a common set of UI components provided by the UX team, e.g., tables, and testers from different teams repeat routines to access those components, e.g., sorting and reading values from a certain column and row of a table – Selenium/WebDriver wrappers for the UI components can be shared as a UI test library.
  • Communication between microservices in a distributed system is naturally sensitive to network errors and temporary outages of individual microservices; request retry is a common pattern to handle such issues; a test library that implements this pattern on top of a popular testing library like REST-assured can be shared with all the teams performing REST service testing.
  • Different teams take different approaches to handling tests that fail because of an already known bug – some prefer to include results of such tests in a report, others prefer to exclude them; a test library that enables running or skipping such tests, depending on the configuration, was found useful by both types of teams.
  • The behavior of the system depends on user privileges, group membership and other properties of a user; a test library that finds or creates test users with certain properties can be helpful to many teams.
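The request-retry idea from the list above can be sketched as a tiny library. This is a minimal illustration with hypothetical names, not the actual library; in a real test the supplier would wrap a REST-assured call:

```java
import java.util.function.Supplier;

// Retries a call up to maxAttempts times, backing off between attempts;
// rethrows the last failure when all attempts are exhausted.
class Retry {

    static <T> T withRetries(Supplier<T> call, int maxAttempts, long backoffMillis) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e; // remember the failure in case this was the last attempt
            }
            if (attempt < maxAttempts) {
                try {
                    Thread.sleep(backoffMillis); // back off before the next attempt
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    throw last;
                }
            }
        }
        throw last;
    }
}
```

A flaky REST call could then be wrapped as, say, `Retry.withRetries(() -> client.getReport(), 3, 500)` – the test stays readable, and the retry policy lives in one place.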

Teams can use one or more libraries, but it is up to each team which ones to pick.

Do One Thing and Do It Well

A test framework does multiple things, such as generating test data, mocking external systems, logging debug information, and handling interaction with the Web UI. A test library should do only one of those things and not the others. For instance, a mocking library would support mocking system X.

The concept of tools doing only one thing comes from the Unix philosophy of building software. It has been present in the software engineering industry for more than 40 years and stands in opposition to building systems as monoliths. Its benefits have been widely discussed, but only now, as I’m writing this post, do they finally seem obvious to me. It has taken me a long time to understand why moving from monolithic frameworks to test libraries is worth the effort.

With a large system, it’s hard to find one person who knows, on their own, how all of the pieces such as authentication, UI navigation, report generation, and database population work together. A similar problem occurs with frameworks for testing such systems: it’s nigh impossible for any one person to have the breadth and depth of experience required to understand how all parts of the framework work. Splitting code out of the framework into common libraries lets testers specialize in their particular strengths. For instance, my team had enough expertise in the authentication layer of our flagship product to develop an authentication library for testing purposes. Another team focused on building a library for testing UI reports related to traffic billing, their domain of expertise. Each team focused on one thing and did it well!

With a library that does only one thing, fixing bugs and adding new features is easier. Rather than working through a complex monolithic test framework and worrying about complex regression testing, a maintainer of the library can focus on a single small set of functionalities. I once contributed a feature I needed in my tests to the test framework. It took significant time to release a new version with my changes: making sure the changes didn’t impact other users of the framework took days, and thus we released new versions infrequently. With a library, development and releasing are much faster.

Another benefit of libraries is the possibility to compose functionalities together. Imagine you would like to send an HTTP request to a protected resource requiring authentication. The request fails with an error, and your colleagues want to reproduce the problem with the curl command-line tool. The whole functionality can be achieved by composing three different libraries: REST-assured (for sending HTTP requests), an internal authentication library (for authenticating and signing HTTP requests with session tokens) and curl-logger (for printing curl commands).

// Authentication library: obtain a session token
Session session = new RestAssuredAuthClient(baseUri).authenticate(user, password);

// REST-assured request, signed by the authentication library;
// curl-logger prints it as a curl command for easy reproduction
given()
    .filter(new RestAssuredSigningFilter(session))
    .formParam("startDate", "2018-09-05")
    .post(resourceUri);

Identifying areas for test libraries

If a library should do only one thing, then you probably ask yourself what these single things could be in test automation. Here are a few ideas that come to my mind:

  • generating test data, such as randomized email messages, or finding test users in the system,
  • simulating sub-systems not available for testing, for instance, a library mocking external Single Sign-On providers like Google Sign-In,
  • simulating a user interacting through GUI or simulating a client application communicating via API, for instance, a library authenticating a user through UI or a library booking a flight through SOAP Web service,
  • generating diagnostic information to help troubleshoot failing tests, for example, a library that prints HTTP requests submitted with an HTTP client as curl commands,
  • detecting certain kinds of error conditions in a product, for example, a library with assertions checking the state of a user in a database,
  • recording and replaying certain events, for example, a library recording HTTP traffic from production and replaying it in a test environment.
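To illustrate the first idea, here is a minimal sketch of a test-data generator for randomized email addresses. The class, method and domain names are hypothetical; the point is that a fixed prefix makes test accounts easy to find and clean up, while a random suffix keeps parallel test runs from colliding:

```java
import java.util.Random;

// Generates randomized but traceable test email addresses.
class TestEmails {

    private static final Random RANDOM = new Random();

    static String randomEmail(String testDomain) {
        // "qa-test-" prefix marks the account as disposable test data
        return "qa-test-" + Long.toHexString(RANDOM.nextLong() & Long.MAX_VALUE)
                + "@" + testDomain;
    }
}
```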

Note that test libraries can be specific to your product or can have a wider audience. For instance, the Selenium library provides a base for interacting with any Web UI, while the Luna Portal reporting library, built on top of Selenium, provides routines for interacting with the UI of our specific system.

How do you find potential areas for a test library in your current project? I have learned that this is an organic process. Usually, when a project starts, I do not have enough knowledge about the domain and the internals of the system. Automated test scripts grow slowly and are frequently refactored, as my initial assumptions about the system and the domain often turn out to be wrong.

It is not a bad thing if both test scripts and the routines used by those scripts live in the same repository – it eases frequent refactoring. I try to follow the rule of three: when the same code is used three times or more, I extract it into a separate procedure or helper class. It takes time to understand whether those classes are stable enough to be factored out into a separate test library.


Sharing libraries

We have built a number of such libraries in one of our teams, responsible for the authentication layer. The initial goal was to ease scripting regression tests for authentication, but ultimately the libraries proved useful for other teams. Two have been open-sourced and are used both by customers building applications on top of the company infrastructure and by the online community testing REST services unrelated to my current employer.



The whole endeavour of introducing test libraries was a joint effort of my team at Akamai. Many thanks to, in no particular order: Anatoly Maiegov, Mariusz Jędraczka, Krzysztof Głowiński, Martin Meyer, Antonio di Maio, Bartłomiej Szczepanik, Patrizio Rullo, and Chema del Barco.

Teaching test automation: industry meets academia

On April 11, 2016 I gave a lecture at Krakow’s AGH University of Computer Science about “Test Automation: More than Automating Tests”. I tried to answer the following questions:

  • What is test automation and why use it (or why not)?
  • What are small and large examples of test automation in industry?
  • What challenges are there in test automation in industry?

The lecture was part of a program of four open lectures on Testing Web Applications, run in collaboration with Akamai Technologies, where I work. It was followed by labs with students, where I taught how to test a distributed application with both a Web UI and a REST API:

  • Black box testing of the application
  • Automating GUI tests with Selenium
  • REST API white box testing with curl and REST-assured
  • Mocking bank REST API with ServerMock


All materials from the lecture and laboratories are now publicly available:


Why do I do this?

In my previous post I tried to understand why so many candidates fail to get technical tester job positions. Part of the problem, as I found through the survey, is that candidates do not have enough technical skills, and those who do, do not apply for such positions.

Obviously, there are many reasons why this happens, but I would like to focus on the following two:

  • Universities have recently recognized testing as a career path but do not teach the practical skills the industry needs.
  • Testing as a career has bad PR.

To better picture that, let me quote one of the surveyed subjects:

Many companies looking for testers claim they need someone to automate, or able to learn automation, and then it turns out that it is only about manual tests – automation was just PR meant to persuade a candidate to accept a contract. And once you sign a contract, hardly anyone resigns.

I have chosen those two reasons because those are the two I am able to help with. In particular, I can share my experience of companies where test automation was a real thing – and, in fact, interesting and challenging. The skills I learned there are also something that I can teach future testers.


Many thanks to Bartosz Kwołek and Marek Konieczny from AGH university, and to Chema del Barco, Małgorzata Janczarska, Piotr Szpor, Ela Sermet, Michał Pażucha and others from Akamai for the feedback on my presentation and laboratories. I also owe many thanks to Patrizio Rullo and Félix Cachaldora Sánchez for helping with implementing the application in Ruby (I didn’t know Ruby at all before!).

Isolating bugs with REST services

I describe a library that helps testers and devs isolate bugs found with REST-assured.

REST-assured is a popular Java library that makes testing REST services easier. For the last few months we have used it to automate tests for a complex authentication flow. A large part of the tests consisted of multiple REST calls, checking system behaviour in different states. Finding bugs was one challenge, but reproducing them was another. In particular, we struggled with:

  • Bug isolation. You’re trying to isolate a bug you found in a long test with REST-assured, but you don’t want to replay the whole scenario over and over. You just want to replay one step and see the HTTP response.
  • Pair debugging. Together with a dev, you’re debugging a bug. She’s putting breakpoints in her code and asking you to replay your REST-assured test over and over, because she has not downloaded it yet.
  • Bug reporting. A bug you have found must be reported in a bug tracking system and you need to quickly describe steps to reproduce it. While for UI it is often fairly easy to list clicks and choices, for a backend system describing API calls is harder.

All those problems can be addressed with cURL. cURL is a popular command-line tool available for many platforms; with many *nix distributions it comes pre-installed, so a dev can easily reproduce an issue. Initially, I crafted those curl commands manually, but this scaled poorly. Other testing platforms offer automatic generation of curl commands from HTTP requests (Chrome Developer Tools, the Postman add-on for Chrome, the Firebug add-on for Firefox, the OkHttp client), but REST-assured was missing that.

So I’ve created a small library, curl-logger.

How does it work?

Imagine you’re trying to send an HTTP GET request with REST-assured:


curl-logger will log it as the following curl command:

[code lang="text"]
curl '' -H 'Accept: */*' -H 'Content-Length: 0' -H 'Host:' -H 'Connection: Keep-Alive' -H 'User-Agent: Apache-HttpClient/4.5.1 (Java/1.8.0_45)' --compressed --insecure --verbose
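The core idea behind such a generator can be sketched in a few lines. Note this is an illustrative simplification, not curl-logger’s actual implementation – real requests also need body handling and proper shell escaping:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of turning an HTTP request into a curl command:
// quote the URL, emit each header as -H 'Name: value',
// and add the method explicitly for anything other than GET.
class CurlCommand {

    static String from(String method, String url, Map<String, String> headers) {
        StringBuilder curl = new StringBuilder("curl '").append(url).append("'");
        if (!"GET".equals(method)) {
            curl.append(" -X ").append(method);
        }
        for (Map.Entry<String, String> header : headers.entrySet()) {
            curl.append(" -H '").append(header.getKey())
                .append(": ").append(header.getValue()).append("'");
        }
        return curl.toString();
    }
}
```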

You will find more details on how to set up the library on the project page.

Waiting for your feedback

The library generates curl commands from the actual request rather than only the request specification you wrote, so if the HTTP client in your test accepted server cookies, they will be included by curl-logger as well. Currently, the library supports multiple HTTP methods and the multipart/form-data and multipart/mixed content types, and handles file attachments.

If you find it useful, encounter a bug or miss a feature, let me know.

Technical testers: mistakes to avoid during interview

Finding a technical tester is hard for many companies. Through a survey of 25 testers, I try to find out why.

Last month a workmate from another team was fired. I knew him a bit, and I remember he felt frustrated and overloaded. He said they wanted him to do very technical checks, like troubleshooting a complex distributed system, which, in his opinion, was rather a developer’s task. Curious about the other party’s opinion, I asked his manager why he fired him. – He couldn’t cope with his tasks. This position required a developer’s knowledge, but we didn’t know that when we were interviewing candidates.

This reminded me of how hard it was for my teams to find really good technical testers. We spent several months interviewing numerous candidates, in a continued process of improving job descriptions and recruitment techniques. Still, we were rejecting a lot of candidates.

I started to investigate whether this is only a problem in my company. Through a survey of 25 members of a Polish testing forum, I learned this might be a bigger problem for the Polish IT job market. The average time to find a technical tester for survey participants was 3.6 months (ranging from 1 to 8 months), and during this time companies interviewed more than 8 candidates on average (ranging from 1 to 30) until the right candidate was found for a given position. This is a pretty long recruitment process compared to the time we usually spend on finding a developer. It significantly impacts the time necessary to build a good QA team.

So we know it is a bigger problem. But why does it happen? In economic terms, demand for technical testers is much higher than supply. This answer, however, does not explain why supply is so low.

In this post I will try to answer that question based on the survey results and follow-up discussions with other testers from the community. If you are thinking about a career as a technical tester, I hope you will learn what mistakes you can avoid during an interview and what skills and habits are worth learning. I also hope this post will raise some awareness of what kind of projects a technical tester might be good for and how to interview them. The story I started with suggests that teams often may not realize what type of tester they need.

Why were candidates rejected?

In the survey, people involved in recruitment voted for the most common reasons for rejecting candidates for technical tester positions. They could mark more than one reason, and could also add their own. In the table below I list the top 10 reasons for rejection, together with the number of people who voted for each one. Basically, those reasons can be split into two groups: insufficient skills and bad attitude. I will explain in detail what I mean by that and give examples from my own and other participants’ experiences.

Main reason for rejection                                                              Votes
Insufficient programming skills                                                           17
Does not want to be a tester                                                               9
No experience with the technology in the new project                                       9
Does not know how to design test cases of sufficient coverage                              9
Does not know the architecture of the system tested in a previous job                      8
Cannot look for the cause of a bug                                                         6
Does not want to work with a developer on test design                                      5
Does not want to look for the cause of a bug                                               4
Does not want to work with a developer on reproducing bugs and finding their causes        3
Little skill in designing test cases                                                       2

No programming skills

Many candidates lacked basic programming skills. During interviews, we often task testers with implementing a simple function that requires combining if/then clauses with loops. Surprisingly, many candidates – who had been scripting Selenium tests in Python, Java and other programming languages – failed to implement simple loops. One of the survey participants explained this might be because 50% of candidates who wrote “automation” in their CVs meant test scripts that were written by someone else, while the candidates only executed them and analysed their results.

I have also learned that some testers lack clean-code skills: they write Selenium tests that are hard to review, hard to debug, and hard to repair. Some months ago we asked a candidate to automate a test scenario for a simple application. However, the automated tests contained so many anti-patterns (see my other post on that topic) that we had to reject the candidate.

Why are those skills important to me? A technical tester:

  • will automate tests on different levels and for different application layers, including backend APIs and isolated components, and thus must be able to read existing API definitions and component source code,
  • will write libraries to support test automation, where (“Lessons Learned in Software Testing: A Context-Driven Approach”, lesson 126):

“Useful libraries require stronger design principles than just avoiding the repetition of code. […] We have reviewed test suites built with hodge-podge libraries on several occasions. The results are never pretty.”

I don’t want to be a tester!

I remember well an interview with one very honest candidate:
— Do you have any questions to us?
— Yes. Is there a better job here?
— Better than…?
— Better than tester.
— For instance?
— Manager’s job. I would like to become a test manager in a year.

This candidate hadn’t demonstrated great technical skills and had unrealistic expectations. It seemed to us that he did not want to be a tester and considered this profession worse than that of a programmer or a manager.

In fact, the second most common reason to reject candidates, according to survey participants, was that the candidate did not really want to be a tester. It rather appeared that he or she would treat this position as a stepping stone to becoming a developer or a test manager. While, in principle, switching roles is not a bad thing, the prospect of such a switch in the near future brings a risk that a candidate will not treat his or her new job seriously, and soon the team will need to look for a new tester.

That’s not my problem!

In Agile teams, the responsibilities of testers and developers often overlap, and the same task can be performed by both groups. For instance, isolating a bug reported by a tester is an example of such a task, and there are factors that often make a tester the more suitable person to do it. Danny R. Faught, in his article “How to Make your Bugs Lonely: Tips on Bug Isolation”, lists such factors, concluding:

“In most of the organizations I’ve observed, I believe the testers should have been doing more bug isolation, especially for severe bugs.”

I have observed that it is often most effective when a developer, a customer and a tester work together to isolate a bug. Hence, during a job interview I play the role of a customer or a developer and ask the tester for help. Many candidates do not know how to look for the cause of a bug, where to look for it, or what tools to use to diagnose the problem. Surprisingly, some testers do not even want to look for the cause of a bug during an interview. Many of them mentioned they had never paired with a developer to reproduce a bug, saying that in their current or previous companies it was a developer’s responsibility. It may be the case that when you believe isolating a bug is not your problem, you also lose the opportunity to learn from developers how to do it.

In total, lack of the skills or the will to look for a bug’s cause (alone or with a programmer) was mentioned 13 times (6 + 4 + 3) as a reason to reject a candidate.

Insufficient testing skills

The book “Cracking the Coding Interview” offers the following task to gauge whether a tester can test: “We have the following method used in a chess game: boolean canMoveTo(int x, int y), x and y are the coordinates of the chess board and it returns whether or not the piece can move to that position. Explain how you would test this method.”

However, many candidates fail to solve this or similar tasks. They do not know how to design test cases of sufficient coverage, or have little testing skill in general. Surprisingly, many ISTQB-certified testers have this problem, despite the fact that the ISTQB Foundation Level covers techniques for covering the testing space (Equivalence Partitioning and Boundary Value Analysis).
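For the chess task, a candidate could start from boundary value analysis on the coordinates. A sketch, assuming a standard 8×8 board indexed 0..7 (the class and method names are mine, not from the book):

```java
import java.util.ArrayList;
import java.util.List;

// Enumerates boundary-value coordinates for an 8x8 board indexed 0..7:
// just outside each edge, the edge itself, and a representative interior value.
class BoardBoundaries {

    static List<Integer> boundaryValues() {
        List<Integer> values = new ArrayList<>();
        values.add(-1); // just outside the lower bound
        values.add(0);  // lower bound
        values.add(4);  // representative interior value
        values.add(7);  // upper bound
        values.add(8);  // just outside the upper bound
        return values;
    }

    // Cross the boundary values for x and y to get candidate test inputs.
    static List<int[]> coordinateCases() {
        List<int[]> cases = new ArrayList<>();
        for (int x : boundaryValues()) {
            for (int y : boundaryValues()) {
                cases.add(new int[] { x, y });
            }
        }
        return cases;
    }
}
```

On top of these input partitions, the candidate would still add cases for the piece type, blocked paths, and other game rules – the point is to show a systematic way of covering the input space rather than a handful of ad hoc examples.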

I don’t care how the system works

Nine times, the reason to reject a candidate was that he or she did not know the architecture of the system tested in a previous job. I often ask candidates how the system was built and how they tested it on different levels, because knowing the system architecture:

  • helps in isolating bugs
  • helps in resolving environmental problems, because you know who to go to with a problem in a specific component
  • helps in automating tests on different levels
  • suggests you worked closely with other developers
  • helps in discovering integration bugs (you know where they can occur)

Note that I do not claim a tester should forget about black-box testing and the end-user perspective, because:

“if your primary focus is on the source code and tests you can derive from the source code, you will be covering ground the programmer has probably covered already, and with less knowledge of that code than she had. […] The advantage of black box testing is that you probably think differently than the programmer, and thus, are likely to anticipate risks that the programmer missed.”
(from “Lessons Learned in Software Testing: A Context-Driven Approach”, lesson 22).

Threats to survey validity

I do not claim the survey to be statistically valid. The results can be biased, because the sample of surveyed subjects may not be representative of the whole job market. However, the output demonstrates that finding a good technical tester is hard for other companies too. Also, I have no hard data about the time required to find a developer; it is just the experience of my team and an HR department that usually spends less time finding a backend developer.

Take-away message

Having read this post, you may ask who a technical tester actually is. I have never clarified that term, nor agreed on it with survey participants, but I think the best way to describe such testers is by the skills I have listed in this post. They should be good at programming, like being a tester, and be committed to that profession. They should also be able to work closely with a developer on test case design, test automation and bug isolation. They should be creative about inventing test cases and willing to understand how a system under test works.

And why do you think it is hard to find a technical tester? I’d love to hear what you think about this issue. Let me know in the comments.

P.S. In the next post I plan to discuss why candidates for technical tester positions may not have such skills and why engineers with the right skills do not apply for them.

Anti-patterns in test automation

End-to-end tests are known to be flaky. Addressing anti-patterns appearing in such tests can make your tests more reliable.

As discussed by Mike Wacker from Google, end-to-end tests are known to be flaky and slow to run, and when they fail it is hard to isolate the failure’s root cause. Part of those problems stems from anti-patterns appearing in such tests, and addressing those anti-patterns may make your tests more reliable, more useful in isolating root causes, and cheaper to maintain.

The following list of 8 anti-patterns comes from my test automation experience. Some I found in legacy test suites my teams and I inherited. Others were committed by candidates I interviewed for testing positions. Selected ones come from our fellow developers who helped us with test automation.

Hardcoded test data (#1)

This usually happens when you start small and think small, without a longer-term perspective in mind. Let’s imagine you’re testing authentication in your system with a sample user:

[code lang="java"]
String testUser = "";

When coming back to the test code a month later, you might ask yourself why you wanted to test with this particular user. Is it because you wanted to test with a user that is inactive? Or maybe a Japanese-speaking user? Hard to guess. And when the test suite grows to several hundred test cases, maintaining hardcoded test data becomes a nightmare. How do you handle it? The same way you handle the magic number anti-pattern.
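For example, you can name the data for the property that matters to the test, just as you would replace a magic number with a named constant. The values and names below are hypothetical:

```java
// Name test data for the role it plays in the tests,
// rather than hardcoding an opaque value inline.
class TestUsers {

    // The property each user exists to exercise is now explicit.
    static final String INACTIVE_USER = "inactive.user@example.test";
    static final String JAPANESE_LOCALE_USER = "ja.user@example.test";
}
```

A call like `login(TestUsers.INACTIVE_USER)` then documents its own intent; a step further is to create or look up such users at runtime by their properties instead of fixing the values at all.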

Hardcoded environment configuration (#2)

Imagine the same checks must be run against both Firefox and Chrome, or against local and then pre-production test environments. There is no way to do it if you have hardcoded references to the browser type, server host or databases. A solution is to make your tests environment-agnostic and provide the configuration to the test at runtime, e.g., by reading it from a configuration file. Additionally, updating the configuration will then not require modifying multiple files.
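A minimal sketch of such runtime configuration, using a plain Java properties file (the property names here are hypothetical):

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.Properties;

// Reads environment configuration at runtime instead of hardcoding it,
// so the same tests can run against Firefox or Chrome, local or pre-production.
class TestConfig {

    private final Properties properties = new Properties();

    TestConfig(Reader source) {
        try {
            properties.load(source); // e.g., a FileReader over test-env.properties
        } catch (IOException e) {
            throw new IllegalStateException("Cannot read test configuration", e);
        }
    }

    String browser() {
        return properties.getProperty("browser", "chrome"); // default when unset
    }

    String baseUrl() {
        return properties.getProperty("baseUrl");
    }
}
```

A test then asks `config.browser()` instead of hardcoding a browser name, and switching environments means editing one file (or one system property) rather than many tests.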

Taking environment state for granted (#3)

Taking the environment state for granted is often over-optimistic. Unlike in unit tests, an end-to-end setup gives little control over the test environment state, particularly when it is shared with other teams. When a test starts failing, it might be because of a newly introduced bug, or because the environment is not in the state your test needs. For instance, a user you use for tests has been locked out by another team, or the flight schedule has changed and you can no longer use a connection from London to Los Angeles in your tests. There are a number of ways to handle such issues:

  • setting the system environment to a certain state, e.g., creating a test user,
  • finding test data matching a specification in the existing environment, e.g., a roundtrip flight,
  • checking the environment is in the required state and skipping the test if it is not.

Each of those solutions can be done manually before each test run, but with a large number of tests and a dynamic environment this simply does not scale. An alternative is to automate one of those solutions. The last one is usually the easiest to implement and saves execution time. It does not guarantee test data for your test, but it will skip (not fail!) the test immediately when it is clear it would provide no useful feedback. JUnit’s assumeThat construction can be intuitive here:

[code lang="java"]
assumeThat(testUser, existsInSystem());

Conditional assertions (#4)

Some people are aware that the environment state may change, so they try to make their test verify different things depending on that state:

[code lang="java"]
if (existsInSystem(testUser)) {
    // test for an existing user
} else {
    // test for a non-existing user
}

However, this is a short-sighted workaround, as it makes your test non-deterministic: you will never be sure which path will be verified in the next pass. In the extreme case, if the environment is always in the same state, only one execution path will ever be tested. In general, there’s no reason to have one test method if you’re testing two different outcomes.

Conspiracy of silence (#5)

When assertions in tests fail with almost no clue why:

[code lang="text"]
Expected: true
Actual: false

it is hard to isolate the root cause of the failure. This happens when using simple assertions like assertTrue or assertEquals. A better solution is to use custom matchers in combination with custom messages:

[code lang="java"]
assertThat("Account with debit is missing", accounts, contains(expectedAccountWithDebit));

No traces left (#6)

Once a test fails, you will need to understand what happened before. However, if you don’t want to know that, follow this anti-pattern:

  • Leave no traces of what your test has done.
  • Don’t report intermediary states of the system in the middle of your test execution.
  • No screenshots, no photos, no paparazzi, particularly when you use headless Web browsers like PhantomJS. It must remain headless, right!
  • No HTTP traffic recorded, no HARs, no curls to reproduce traffic, etc.

Jokes apart, the goal of addressing this issue is to make reproducing the problem you found as cheap as possible. Running the same test again and debugging the test and the system under test over and over is usually expensive, and can be ineffective for intermittent bugs.

Tests cluttered with business logic (#7)

Tests that mix details of the system’s business logic with the steps of a test scenario are hard to read and maintain. A solution is to separate what the test is testing from how it is doing it. In software development this separation of concerns is known as encapsulation. I have found a number of ways to do it in test automation:

  • the Page Object pattern hides Web page details from a tester,
  • domain-specific assertions and matchers hide the technical details of a check from a tester, e.g., UserExistsMatcher,
  • a Domain Specific Language (DSL) describes test steps in the language of an end-user.

I found a good introduction to the latter two approaches in the article Writing Clean Tests – Replace Assertions with a Domain-Specific Language.
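A fluent DSL can be as simple as a class whose methods read like the scenario itself. A sketch with a hypothetical flight-booking domain – in a real suite each step body would drive the actual UI or API, while here the steps are only recorded to show the shape of the pattern:

```java
import java.util.ArrayList;
import java.util.List;

// A minimal fluent DSL: the test reads in the language of the end user,
// while technical details stay inside the step implementations.
class FlightBooking {

    private final List<String> executedSteps = new ArrayList<>();

    FlightBooking searchFlight(String from, String to) {
        executedSteps.add("search " + from + " -> " + to); // real API call goes here
        return this;
    }

    FlightBooking selectCheapestFare() {
        executedSteps.add("select cheapest fare");
        return this;
    }

    FlightBooking payWithCard(String cardNumber) {
        executedSteps.add("pay with card"); // card handling hidden from the scenario
        return this;
    }

    List<String> executedSteps() {
        return executedSteps;
    }
}
```

A test then reads as `new FlightBooking().searchFlight("London", "Los Angeles").selectCheapestFare()...` – the scenario stays legible even to a non-programmer.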

Sleeping for arbitrary amount of time (#8)

Waiting in your test for 4 seconds

[code lang="java"]
Thread.sleep(4000);

because your production system usually takes 4 seconds to go out over the network, fetch some data, and come back over the network with the result – this is baaad. It’s bad because your test becomes fragile to network congestion: it will start failing when network latency increases. What you actually intended is to wait until “a response is returned” or “an object appears”, using explicit and implicit waits (in Selenium) or active polling in general (e.g., with the Awaitility library).
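The active-polling idea behind Selenium waits and Awaitility can be sketched in a few lines. This is a simplified illustration, not either library’s actual implementation:

```java
import java.util.function.BooleanSupplier;

// Polls a condition until it holds or a timeout elapses,
// instead of sleeping for a fixed, arbitrary amount of time.
class Poller {

    static boolean await(BooleanSupplier condition, long timeoutMillis, long intervalMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true; // condition met: stop waiting immediately
            }
            try {
                Thread.sleep(intervalMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return condition.getAsBoolean(); // last check at the deadline
    }
}
```

The test finishes as soon as the condition holds, and the timeout becomes an upper bound for the slowest acceptable environment rather than a fixed cost paid on every run.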

Obviously, there are exceptions where explicit sleeping is fine.

Take-away message

The presented anti-patterns demonstrate that writing system tests is a slightly different beast than writing unit tests. Sure, some anti-patterns, like Wet Floor, can happen both here and there. However, in this post I have focused on anti-patterns specific to end-to-end tests. I have no funny names for those yet, so if you come up with any, let me know.

I’m waiting for your feedback! Do you agree or disagree with some of the anti-patterns? Or maybe you have encountered others?