Have you ever noticed that your automated regression tests are flaky and fail intermittently? Do you and the rest of your team no longer trust the outcome of those tests? Do you have a hard time updating tests when your application changes? Fear not, we're here to help you with some key best practices for writing more resilient tests. Let's get started!
Use Test Specific Selectors
A common cause of flakiness in an automated test is that the elements the test relies on change as you and your team work on the code during the normal course of a sprint.
One fix is to use a test-specific identifier in your HTML and in your automated tests. By adding a dedicated identifier, instead of relying on the ids or classes that already exist in your code, you immediately insulate the test from changes to the code in future sprints.
For this to work, however, there needs to be communication and a shared understanding among your team members that this test-specific identifier is being used in the code.
<a class="u-btn u-btn--tertiary u-btn--is-small c-editorial-top-story__cta"
   href="/ca/en/women/footwear/boots/ankle-boots"
   data-qaid="shop-ankle-boots">Shop ankle boots</a>
In the HTML above, for instance, you've made it clear to everyone that the element is used by the automated tests:
data-qaid="shop-ankle-boots"
Regardless of the identifier you choose, I would suggest prefixing it with data-. Using a data-* attribute also makes it clear that the attribute carries no semantic meaning and is ignored by browsers and assistive technology such as screen readers.
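Your test code can then target the element through that attribute alone. Here is a minimal sketch of a hypothetical helper that builds such a selector (in TestCafe you would pass the result to Selector()):

```javascript
// Hypothetical helper: build a CSS selector from a test-specific identifier,
// so tests never depend on styling classes or structural ids.
function byQaId(id) {
  return `[data-qaid="${id}"]`;
}

// In a TestCafe test: const cta = Selector(byQaId('shop-ankle-boots'));
console.log(byQaId('shop-ankle-boots')); // [data-qaid="shop-ankle-boots"]
```

If a designer later renames u-btn--tertiary or restructures the markup, a test built on byQaId keeps working untouched.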
Automate A Single Flow Independently
We’ve learned that it is a good practice to ensure that your automated tests do not rely on the output of a previous test, or data created in a previous test. Keeping your tests independent from each other helps to ensure that the behavior of your test will be the same regardless of when it is executed.
The other benefit of this independence is that your results will be consistent whether a test runs on its own or as part of a suite.
Keeping the tests independent makes it easier for you to combine them into different suites and therefore promotes reuse of the tests in many different ways. If you would like to learn more, please take a look at this: Hindsight lessons about automation: Test isolation principle.
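As a sketch of the principle (the function and data names here are hypothetical), each test builds its own fixture data instead of consuming another test's output:

```javascript
// Each test creates its own user, so execution order never matters.
function createTestUser() {
  return { name: `user-${Date.now()}`, cart: [] };
}

function testAddToCart() {
  const user = createTestUser(); // fresh fixture, no shared state
  user.cart.push('ankle-boots');
  return user.cart.length === 1;
}

function testCartStartsEmpty() {
  const user = createTestUser(); // also fresh, independent of the test above
  return user.cart.length === 0;
}
```

Either test can run first, alone, or in parallel, and the outcome is the same.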
Don’t Repeat Yourself (DRY)
When you’re writing automated tests and see that you are repeating the same set of steps in more than three places or so, it’s time to start thinking of ways to minimize the duplication of code that you’re writing.
For instance, if there is a repeatable set of steps that you often use in your code that isn’t part of your test framework API, create a reusable function that contains these steps and import it into your test file.
If a block of code is repeated, consider extracting it into a reusable function. This helps with the overall maintainability of your code in addition to assisting with readability.
Don’t worry about making your code DRY while you are first writing a test; consider DRYing it up once the test is running, and before you submit it for code review.
Take the snippet below, for example:
function printAccountBalance(account) {
  if (account.fees < 0) {
    console.log(`Fees: -$${Math.abs(account.fees)}`);
  } else {
    console.log(`Fees: $${account.fees}`);
  }
  if (account.balance < 0) {
    console.log(`Balance: -$${Math.abs(account.balance)}`);
  } else {
    console.log(`Balance: $${account.balance}`);
  }
}
This code snippet can be simplified into something that's easier to read if we move the formatting of the currency over to a different function.
function formatAmount(dollarAmount) {
  const formatter = new Intl.NumberFormat('en-US', {
    style: 'currency',
    currency: 'USD',
  });
  return formatter.format(dollarAmount);
}
As you can see, we have improved both the readability and the maintainability of our code by removing all of the if statements:
function printAccountBalance(account) {
  console.log('Fees', formatAmount(account.fees));
  console.log('Balance', formatAmount(account.balance));
}
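Putting the two together (the account values below are made up for illustration), Intl.NumberFormat even handles the negative sign for us:

```javascript
// Format a dollar amount as US currency, negatives included.
function formatAmount(dollarAmount) {
  const formatter = new Intl.NumberFormat('en-US', {
    style: 'currency',
    currency: 'USD',
  });
  return formatter.format(dollarAmount);
}

function printAccountBalance(account) {
  console.log('Fees', formatAmount(account.fees));
  console.log('Balance', formatAmount(account.balance));
}

printAccountBalance({ fees: -12.5, balance: 100 });
// Fees -$12.50
// Balance $100.00
```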
The concept of DRY code is a vast topic and we could spend hours talking about the importance of this principle when we’re writing code. If you would like to learn more about this topic, the references found here are a good place to start.
Avoid Hard-Coded Waits
When automating an asynchronous application, it is sometimes necessary to wait for an action to complete. We’ve found that waiting for a fixed amount of time can introduce flakiness: the item may not appear within the time you set, or it may appear well before the timeout, in which case your test waits longer than necessary.
In this example, the test pauses for a fixed ten seconds whether or not the sign-in has completed:
await setSignIn(userName, password);
const waitTime = 10000;
await t.wait(waitTime);
await t.click(provinceDropDownList.withText(provinceCode));
Instead, wait on a condition with a generous timeout; the assertion polls and the test continues as soon as the condition is met:
await setSignIn(userName, password);
const waitTime = 60000;
await t.expect(isSignedInConfirmation.exists).ok({ timeout: waitTime });
await t.click(provinceDropDownList.withText(provinceCode));
If you need to wait for a condition in many places, create a polling method that checks for the appearance of the item you are waiting for, so that you can reuse the same method multiple times.
async waitForSelector(selector, waitTime = 1000) {
  await t.expect(selector.exists).ok({ timeout: waitTime });
}
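If your framework has no built-in conditional wait to lean on, a framework-agnostic polling helper looks something like this sketch (the names and default values here are my own):

```javascript
// Poll a condition until it returns true or the timeout elapses.
async function pollUntil(condition, { timeout = 1000, interval = 50 } = {}) {
  const deadline = Date.now() + timeout;
  while (Date.now() < deadline) {
    if (await condition()) return true;
    await new Promise((resolve) => setTimeout(resolve, interval));
  }
  throw new Error(`Condition not met within ${timeout} ms`);
}

// Usage: resolves as soon as the flag flips, well before the timeout.
let ready = false;
setTimeout(() => { ready = true; }, 100);
pollUntil(() => ready, { timeout: 2000 }).then(() => console.log('ready'));
```

The key property is the same as the TestCafe assertion above: the wait ends the moment the condition holds, rather than after a fixed sleep.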
The automation code examples used here are written in TestCafe, but the principles we’re talking about can be applied to any test framework.
Code Organization
Organize your code in a way that allows everyone to quickly find things relative to the items being tested. File, variable, and function names should be clear and describe the intent of the work they perform.
Naming matters because good names help everyone understand what a function does and make the code easier to maintain. A well-chosen name conveys the intent of the work without forcing readers to dive into any particular function.
When choosing a name for a method, pick a particular verb and use that same verb for the same kind of action throughout your code. For example, use get as the prefix for methods that retrieve values and set as the prefix for methods that update a particular variable.
setUserName(username, newUsername) {
}
setUserLocation(username, newLocation) {
}
When there is no consistent naming pattern, the code is harder to maintain, and you or your maintainers will not be able to see at a glance what the code modifies.
setNewUserName(username, newUsername) {
}
changeUserLocation(username, newLocation) {
}
Adopting one of the many test design patterns, such as Page Object Model, Factory, or Facade, can contribute to the maintainability of automated tests. If you’d like to dive deeper into code organization, Clean Code is a good place to start.
Test design patterns that move complexity out of the tests and into a facade help ensure that you only need to maintain the underlying facade rather than the tests themselves. This insulates your tests and makes them more reusable.
class ContactUs {
  openGetHelpModal() {
    click(this.getHelpModal);
  }
  closeGetHelpModal() {
    click(this.closeHelpModal);
  }
}
Automate One Browser First
Focusing on one browser in the beginning allows you to put your emphasis on writing your first suite of tests. We usually make sure that the underlying business logic works properly and that the tests run reliably before introducing the complexity of additional browsers.
For the vast majority of functional cases when a test works in one browser, it should work in others. With this in mind, you should start to focus your attention on one browser first.
Get the analytics for the most used browser on your application and use that browser as the candidate for your automation efforts.
If browser compatibility tests are needed for your application, select a subset of your tests that perform the actions that may break on different browsers and only run those tests on all browsers. This will help to reduce the run time of tests so that you can get results faster.
In closing
These practices help reduce flakiness in your automated regression tests and increase their reliability, so that you and your team can once again put trust in their outcome.