
How to Write an Automated Test Framework in a Million Little Steps

“Creating an automated regression test suite is not a simple matter. Crafting an automated test framework on which future tests can be built can be a complex process. You are writing software to test software, and like constructing any software product, it cannot be done in one fell swoop: There are a million little steps to consider.” — T.J. Maher, Software Development Engineer in Test at Threat Stack

Trying to make sure that adding a new feature to your web application didn’t break any existing functionality? When testing, there are two things you need to focus on: verifying that the new feature works according to design, and validating that existing functionality has not regressed to a state where what once worked is now broken.

After a software tester figures out how to test a new piece of functionality, tests that will need to run repeatedly can be moved into the regression test suite, to be executed whenever they are needed. Verifying that all the regression tests pass after each and every new change deployed to the Threat Stack Cloud Security Platform® web application can be a time-consuming process.

When I worked on the front-end development team as a software tester, I had to verify that the following features were still working properly:

  • Login: Are only authorized users able to log in? Are proper errors thrown when unauthorized users make an attempt to log in?
  • Servers: Does the Servers page still show the virtual server fleet being monitored by our application? Both Linux and Windows virtual environments?
  • Events: A server’s event data is processed by Host, File, CloudTrail, Threat Intelligence, and custom rules to throw alerts, broken down by severity, to help our customers’ DevOps and Security teams analyze what is happening within their systems. Does our application still catch events such as when someone logs in, views a file, or changes permissions on a file? 
  • Rules: Can you create a business rule that will process an alert when an event happens on one of the servers? How about rules for events happening in containers, as in Docker environments or Kubernetes? 
  • Alerts: Can you navigate through existing alerts listed on the Alerts page, and locate the corresponding events that triggered the alert thrown? Can you still dismiss an alert that has a lesser importance? Does the proper histogram appear with the correct labels? 
  • Dashboard: Does the dashboard still display all alerts that have occurred in the past 24 hours? How about servers that have vulnerable packages on them?

As a test engineer on the front-end development team, I tasked myself with creating an automated regression test suite to help execute tests that should be run continuously to monitor and report on the state of the application.

Creating the automated regression test suite would not be a simple matter. Crafting an automated test framework on which future tests can be built can be a complex process. You are writing software to test software, and like constructing any software product, it cannot be done in one fell swoop: There are a million little steps to consider. 

This article will walk you through a few of the questions I had to ask myself when I was tasked with building the first draft of our automation framework at Threat Stack.

The first question I had to answer was… “Who would I be building the automation for?”

Who Are the Stakeholders for the Tests?

After numerous conversations with the development team, I started to realize that many people had a vested interest in the automated test framework I was going to be building:

  • Product owners wanted to make sure that the old features and functionality were still working as expected.
  • Front-end developers wanted to make sure that we could identify whether some subpage was broken due to a change they made.
  • Members of the user experience team needed to verify that a user could still traverse the web application. 
  • Backend developers wanted to know that data was being passed correctly to the user interface. 
  • Developers and test engineers outside our team wanted to be able to kick off the tests themselves in order to verify that their work hadn’t accidentally broken any other functionality. 

Once I knew who the stakeholders were and what they would be looking for in an automation framework, I wanted to make sure that the automation framework was adding value and addressing what they needed. 

How Could the Needs of Stakeholders Be Addressed?

To make sure that each stakeholder would have their needs met, I promised that I would use the same process in writing the automation test framework that I followed when writing tests:

  • Tests would be written as stories with bullet-pointed steps, stored in JIRA.
  • Developers, product owners, and members of the user experience team would help develop a backlog of stories we could pull into a two-week sprint of work. 
  • Before any code was written, the stories would be reviewed, first by members of the test team, then by the developers, designers, and product owners.
  • After the code was written, it would be reviewed by all my fellow test engineers: Those more senior than me could provide guidance, while those more junior could be brought up to speed faster before they had to add their own tests to the framework. It would also be reviewed by subject matter experts on whatever programming language I chose. 
  • After every two-week sprint, I committed to demoing the work accomplished, soliciting more feedback. 

For the initial stages, I would be spearheading the project. But the next question was: “Who would own it?”

Who Owns the Automation Framework?

Before deciding what programming language the automation test framework should be built in, or what toolset I should use, I had to find the answer to two questions:

  • Who would be building the automation framework?
  • Who would be maintaining the automation?

Even though I would be spearheading the initial design of the automation, after consulting with my fellow test engineers, I discovered that they would need the ability to add their own unique tests after I put the basic framework in place. And after including management in the conversation, I found it would be the test engineers who would be responsible for building and maintaining the automation framework once it was stood up. 

Once I knew who would be responsible, that helped me answer the next question …

What Programming Language to Use?

There are many different programming languages out there, each with its own quirks. Which one should I use? Should I use Java, the language I was most familiar with? JavaScript, to match the language the front end was coded in? Or should I use Ruby, the language of our existing test framework for our Threat Stack API?

Because I went through the exercise of answering the question “Who Owns the Automation Framework?,” I had my answer. The language of choice amongst the test engineering team was Ruby; therefore, the automation framework should also be written in Ruby.

Although I did not know any of the Ruby programming language at the time, I received a commitment from some senior developers who were Ruby subject matter experts that they would be able to give me hands-on training as needed and also let me tap them on the shoulder for code reviews. 

Now that I knew the programming language I would be coding in, and who would help guide me on my journey, I had to figure out: What automation toolset should I use so the tests written could interact with a web browser? 

How to Interact With the Web Browser?

Many different automation toolsets are available for testing a web-based user interface, each enabling the tests you write to manipulate a browser the way a user would with a mouse and keyboard. These toolsets allow you to simulate clicking on links, hovering the mouse over a part of the web application, pressing keys on a keyboard, and clicking on buttons.

Selenium WebDriver is at one end of the spectrum. It’s a framework built by an amazing group of volunteers, and it works with the Java, Python, Ruby, JavaScript, and C# programming languages. Test engineers can compose a test framework out of the building blocks Selenium WebDriver provides, customizing how they handle slow-loading components on a web page, deal with errors that may happen as a test runs, log what is executed, and report the results of the test.
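To make that concrete, here is a minimal sketch of driving a browser with raw Selenium WebDriver in Ruby. The URL, element IDs, and timeout are hypothetical, and the explicit wait and error handling are exactly the plumbing you end up building yourself:

```ruby
require 'selenium-webdriver'

driver = Selenium::WebDriver.for :chrome
# You supply your own handling for slow-loading components...
wait = Selenium::WebDriver::Wait.new(timeout: 10)

begin
  driver.navigate.to 'https://app.example.com/login' # hypothetical URL
  # Poll until the (hypothetical) email field exists before typing into it.
  email = wait.until { driver.find_element(id: 'email') }
  email.send_keys 'tester@example.com'
  driver.find_element(id: 'login-button').click
rescue Selenium::WebDriver::Error::TimeoutError => e
  # ...and your own error handling, logging, and reporting.
  puts "Login page never finished loading: #{e.message}"
ensure
  driver.quit
end
```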

Toolsets such as Watir and Capybara (both built with the Ruby programming language in mind) are at the other end of the spectrum. They are Domain-Specific Languages (DSLs): higher-level abstractions of the common actions you take when interacting with a browser. For example, with Capybara, you can use built-in methods to spin up a Chrome browser and visit a page, fill_in a textbox, click_button, click_link, or select a dropdown.
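For example, a minimal Capybara sketch might look like the following (the page URL, field labels, and credentials are hypothetical, and it assumes a recent Capybara with its built-in :selenium_chrome driver):

```ruby
require 'capybara'
require 'selenium-webdriver'

# Spin up a Chrome-backed browser session.
session = Capybara::Session.new(:selenium_chrome)

session.visit 'https://app.example.com/login'        # visit a page
session.fill_in 'Email', with: 'tester@example.com'  # fill_in a textbox
session.fill_in 'Password', with: 'not-a-real-pass'
session.click_button 'Log In'                        # click_button
session.click_link 'Alerts'                          # click_link
session.select 'Last 24 Hours', from: 'Time Range'   # select from a dropdown
```

Note how the waiting and retrying that the raw WebDriver sketch above handles by hand is built into Capybara’s finders; that is much of what the DSL buys you.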

At a previous job as an automation developer, I had already used raw Selenium WebDriver, writing my own methods in Java to handle slow-loading components and errors. This time around, I wanted to use an existing DSL instead of writing my own, in order to save time. I selected Capybara over Watir only because I would receive a bit of in-house support with that toolset if I ever got stuck.

Want to learn more about Capybara? See the free online Introduction to Capybara course T.J. Maher wrote for Test Automation University.

How to Set Up Tests?

Now that I knew: 

  • The programming language for the test framework would be Ruby.
  • The tests would interact with the browser by using Capybara.

All I needed to figure out was how to set up the acceptance tests themselves. These tests provide living documentation of how a web application works, capturing whether its behavior matches what is expected.

Answering this part was easy: For our automated test suite testing the Threat Stack API, we were using a new toolset by ThoughtWorks called Gauge. For the web application testing automation, we could use the same tool.

A test in ThoughtWorks Gauge has two main parts:

  • The test specification, written in plain language, is a step-by-step, bullet-pointed list of instructions on how to carry out the test. The specification should read like a manual test plan: clear and concise, with enough description that others can follow it.
  • The step_implementation folder contains the code that executes the test, wrapping each step in a code block that can be reused whenever and wherever that step is called in a specification.

By having Gauge separate the test plan from the code that executes the test, it makes our tests more readable and maintainable, allowing test steps to be shared between our tests.
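As an illustration, a Gauge spec and its Ruby step implementation might look something like this. (The spec text, file names, and step bodies below are hypothetical, not our actual tests; the `step` method is provided by the gauge-ruby runner, and the browser work is delegated to Capybara.)

```ruby
# login.spec -- the plain-language specification, in its own file:
#
#   # Logging In
#   * Navigate to the login page
#   * Log in as "tester@example.com"
#   * Verify the Dashboard is displayed
#
# step_implementation/login.rb -- the Ruby behind each step, loaded by the
# Gauge runner, which supplies the `step` DSL:

require 'capybara'

Capybara.default_driver = :selenium_chrome        # Chrome-backed sessions
Capybara.app_host = 'https://app.example.com'     # hypothetical host

step 'Navigate to the login page' do
  Capybara.current_session.visit '/login'
end

# Quoted values in the spec arrive as parameters, e.g. <email> below.
step 'Log in as <email>' do |email|
  page = Capybara.current_session
  page.fill_in 'Email', with: email
  page.fill_in 'Password', with: ENV['TEST_PASSWORD']
  page.click_button 'Log In'
end

step 'Verify the Dashboard is displayed' do
  fail 'Dashboard did not load' unless Capybara.current_session.has_content?('Dashboard')
end
```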

Need more information? Read T.J. Maher’s article, Testing Tool Profile: Why Threat Stack Uses ThoughtWorks Gauge.

What’s Next . . .

As you can see, when drafting the automated test framework for the Threat Stack Cloud Security Platform, I covered a lot of ground: researching who the stakeholders were, what they needed, and which toolsets worked best with the development and testing teams. Imagine if I hadn’t! I could have ended up delivering a framework tailored to my own wants and needs but unusable by the other test engineers. By doing the necessary requirements gathering, and by researching how to integrate with the test automation toolsets the test engineering team already supported, I made sure I delivered a product that added value to our company.

In my next article, we will finally get to the good stuff: Coding! Specifically, we will cover the steps I followed when creating our automated testing suite for the Threat Stack web application:

  • Writing a Proof of Concept
  • Reviewing Work With the Stakeholders
  • Building Out the Framework Incrementally
  • Setting Up Tests to Run Continuously

Until next time, Happy Testing!

If you want a great way to keep up with current security issues, tune in to Paul’s Security Weekly, a podcast by and for security professionals. In this segment, Paul Asadoorian discusses T.J. Maher’s “How to Write an Automated Test Framework in a Million Little Steps” post starting at 12:50.