Testing Tools to Identify Flaky Tests

Semaphore
13 min read · May 8, 2024


Effective testing is a crucial step in the software development process: it safeguards a program’s overall quality, security, and usefulness. Testing teams face a number of difficulties, however, and one of them is identifying flaky tests. These tests sometimes fail for no obvious reason, confusing developers and impeding progress. This article covers various testing tools that can help you spot, control, and minimize the impact of flaky tests.

Challenges in Identifying Flaky Tests

In software testing, random test failures can be very frustrating. Flaky tests are unpredictable: they can pass on one run and fail on the next. These inconsistencies might be caused by hard-to-replicate external dependencies, complex interactions between different parts of the code, or timing problems. Identifying flaky tests becomes even harder when several tests run in parallel, application behavior changes, or tests execute under unexpected conditions.
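A classic source of the timing-related flakiness mentioned above is asserting on shared state before background work has finished. The following self-contained Java sketch (purely illustrative, not tied to any framework) shows the deterministic alternative: waiting on an explicit completion signal instead of guessing with `Thread.sleep`:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class TimingSketch {
    // Flaky pattern: asserting on shared state before the background work is done.
    // Deterministic fix: wait on an explicit completion signal (a latch)
    // instead of guessing with Thread.sleep.
    static String fetchWithLatch() {
        StringBuilder result = new StringBuilder();
        CountDownLatch done = new CountDownLatch(1);
        new Thread(() -> {
            result.append("loaded"); // simulated asynchronous work
            done.countDown();        // signal completion
        }).start();
        try {
            if (!done.await(2, TimeUnit.SECONDS)) {
                throw new IllegalStateException("background work timed out");
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException(e);
        }
        return result.toString();
    }

    public static void main(String[] args) {
        System.out.println(fetchWithLatch()); // reliably prints "loaded"
    }
}
```

Replacing sleep-based guesses with explicit synchronization like this removes the race that makes such tests pass or fail depending on scheduling.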

The asynchronous and dynamic loading of web application components makes the job harder still, so it is difficult to create a reliable, consistent testing environment and to find the root cause of random test failures. This is where dedicated testing tools become important. They let testing teams identify, control, and mitigate the effect of flaky tests on the overall testing strategy, improving the dependability of your test results. You can read more about best practices to help mitigate flakiness here.

Tools for Detecting and Managing Flaky Tests

Several testing tools can simplify the process of finding and handling flaky tests. Let’s explore some popular options:

TestNG

TestNG, a popular Java testing framework, includes built-in features for handling flaky tests. It uses listeners and retry analyzers to closely watch test runs, detect problems, and group tests appropriately.

  • Listeners: These record events during a test run and offer hooks for running custom code before or after defined test events. Listeners let you implement custom rules for identifying and handling unreliable tests.
  • Retry analyzers: TestNG’s retry analyzers provide a built-in mechanism for rerunning failed tests, which helps reduce flakiness. An analyzer inspects the original test result, detects the failure, and selectively retries the failed test based on predefined criteria.
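As a rough illustration of what a retry analyzer does (TestNG’s real hook is the `org.testng.IRetryAnalyzer` interface, attached via `@Test(retryAnalyzer = ...)`), the core logic amounts to rerunning a failed test body a bounded number of times. This self-contained sketch models that idea without the framework:

```java
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.BooleanSupplier;

public class RetrySketch {
    // Rerun a failing test body up to maxRetries additional times.
    // Returns true if any attempt passed; a pass after a failure suggests flakiness.
    static boolean runWithRetries(BooleanSupplier testBody, int maxRetries) {
        for (int attempt = 0; attempt <= maxRetries; attempt++) {
            if (testBody.getAsBoolean()) {
                if (attempt > 0) {
                    System.out.println("Passed after " + attempt + " retries: likely flaky");
                }
                return true;
            }
        }
        return false; // failed every attempt: a genuine failure, not flakiness
    }

    public static void main(String[] args) {
        // Simulated flaky test: fails on the first two attempts, then passes.
        AtomicInteger calls = new AtomicInteger();
        BooleanSupplier flakyTest = () -> calls.incrementAndGet() > 2;
        System.out.println(runWithRetries(flakyTest, 3));
    }
}
```

A retry that eventually passes is worth logging rather than silently accepting: the log line is exactly the signal you need to build a list of flaky tests to fix.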

Adding TestNG to your project

Here is a step-by-step guide to setting up TestNG in IntelliJ IDEA (the IDE comes bundled with TestNG 7.1.0):

  • For Maven users:
  1. Navigate to your project’s root directory.
  2. In pom.xml, press Alt+Insert and choose Dependency.
  3. In the window that appears, enter testng into the search field.
  4. Locate the org.testng:testng dependency, choose its version from the search results, and then click Add.
  5. After adding the dependency to pom.xml, press Ctrl+Shift+O or click the refresh icon to import the changes.
  • For Gradle users:
  1. Open build.gradle in your project’s root directory.
  2. Press Alt+Insert in build.gradle and choose Add Maven artifact dependency.
  3. Enter testng into the search field of the window that opens.
  4. Find the org.testng:testng dependency, choose its version from the search results, and click Add.
  5. Once the dependency has been added to build.gradle, press Ctrl+Shift+O or click the refresh icon to import the changes.
  • For the IntelliJ build tool:
  1. Select File | Project Structure from the main menu or press Ctrl+Alt+Shift+S.
  2. Under Project Settings, choose Libraries, then click New Project Library and select From Maven.
  3. In the dialog that appears, specify the required library artifact, for example: org.testng:testng:6.14.3.
  4. Apply your changes and close the dialog.
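Whichever route you take, the resulting entry in pom.xml should look roughly like this (the version shown is an example; use the release you installed):

```xml
<dependency>
    <groupId>org.testng</groupId>
    <artifactId>testng</artifactId>
    <version>7.1.0</version>
    <scope>test</scope>
</dependency>
```

The Gradle equivalent is a single line in build.gradle: testImplementation 'org.testng:testng:7.1.0'.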

Creating a new TestNG class

The simplest way to create a new test class in IntelliJ IDEA is to use a dedicated intention action invoked from the source code. The IDE then creates a new test class and generates stub test methods for the selected class, package, or function. Here are the steps:

  • In the editor, place the caret at the class you want to test in your production code, press Alt+Enter, and choose Create Test.
  • Choose the testing library you wish to use in the Create Test dialog. If you don’t already have the required library, the IDE will offer to download it; click Fix to do so.
  • If you’re using Maven, the IDE will add the missing dependencies to your pom.xml. For Gradle projects, add the required dependencies manually.
  • Set the name and location of the test class and choose the methods you wish to test. Press OK.
  • IntelliJ IDEA then creates a new test class with the given name and the generated test methods under the Test Sources Root.

Running TestNG tests

Executing TestNG tests within IntelliJ IDEA is straightforward:

  • Individual Test: Click the “Run” icon in the gutter next to the desired test.
  • Entire Test Class: Right-click the test class name and select “Run” -> “Run.” The “Run” tool window will display the test results for your evaluation.

JUnit Flaky Test Plugin

The JUnit Flaky Test Plugin is a useful tool for developers working with JUnit test suites. It addresses a prevalent problem: flaky tests, which produce inconsistent results, passing on some runs and failing on others for no obvious reason. This inconsistency can hinder development by wasting time on failures that do not indicate real defects. JUnit 5 is the latest version of the JUnit testing framework, offering a modern foundation for developer-side testing on the Java Virtual Machine (JVM), with a focus on Java 8 and above and support for a variety of testing styles. Here is how to set up and run tests.
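Conceptually, tools that flag flaky tests rerun the same test many times and look at how often it passes. A minimal, framework-free Java sketch of that idea (the names here are illustrative, not part of any JUnit API):

```java
import java.util.function.BooleanSupplier;

public class FlakinessProbe {
    // Run the test body `runs` times and return the observed pass rate.
    // A rate strictly between 0.0 and 1.0 indicates a flaky test:
    // a stable test passes (1.0) or fails (0.0) every time.
    static double passRate(BooleanSupplier testBody, int runs) {
        int passes = 0;
        for (int i = 0; i < runs; i++) {
            if (testBody.getAsBoolean()) {
                passes++;
            }
        }
        return (double) passes / runs;
    }

    public static void main(String[] args) {
        // Simulated flaky test: passes on even-numbered attempts only.
        int[] attempt = {0};
        BooleanSupplier flaky = () -> attempt[0]++ % 2 == 0;
        System.out.println("pass rate: " + passRate(flaky, 10)); // 0.5 -> flaky
    }
}
```

In practice the rerunning is done by the plugin or the CI system rather than by hand, but the verdict rests on the same pass-rate signal.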

Creating Project

  1. From the main menu, select File | New | Project.
  2. Choose Java from the left selection in the New Project wizard.
  3. Give the project a name, such as junit-tutorial, and choose Maven, Gradle, or IntelliJ as the build tool.
  4. Choose the JDK that you wish to utilize for your project from the list of JDKs.
  5. Choose Add JDK and enter the path to the JDK home directory if the JDK is installed on your machine but is not defined in the IDE.
  6. Click Download JDK if your machine is missing the required JDK.
  7. Press the Create button.

Add dependency

We must add JUnit as a dependency in order for our project to leverage JUnit functionalities.

  • For Maven users:
  • In your project’s root directory, open pom.xml.
  • Press Alt+Insert in pom.xml and choose Dependency.
  • Enter org.junit.jupiter:junit-jupiter in the search box of the dialog that appears.
  • In the search results, choose the required dependency and select Add.
  • Once the dependency has been added to pom.xml, press Ctrl+Shift+O or click the refresh icon to import the changes.
  • For Gradle users:
  • Open build.gradle in your project’s root directory. You can press Ctrl+Shift+N to go to a file quickly by entering its name.
  • Press Alt+Insert in build.gradle and choose Add Maven artifact dependency.
  • Enter org.junit.jupiter:junit-jupiter in the search field of the tool window that appears.
  • In the search results, choose the required dependency and select Add.
  • Once the dependency has been added to build.gradle, press Ctrl+Shift+O or click the refresh icon to import the changes.
  • For the IntelliJ build tool:
  • Select File | Project Structure (Ctrl+Alt+Shift+S) from the main menu.
  • Choose Libraries from the Project Settings menu, then click New Project Library and select From Maven.
  • In the resulting dialog box, specify the required library artifact, such as org.junit.jupiter:junit-jupiter:5.9.1.
  • Apply your changes and close the dialog box.
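After the import, the Gradle build file should contain an entry along these lines (the version is an example):

```groovy
// build.gradle
dependencies {
    testImplementation 'org.junit.jupiter:junit-jupiter:5.9.1'
}
```

The Maven equivalent is a `<dependency>` block for org.junit.jupiter:junit-jupiter in pom.xml.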

Creating Sample Code and Tests

In the Project tool window, go to src/main/java and create a Java file called Calculator.java.

import java.util.stream.DoubleStream;

public class Calculator {
    static double add(double... operands) {
        return DoubleStream.of(operands).sum();
    }

    static double multiply(double... operands) {
        return DoubleStream.of(operands).reduce(1, (a, b) -> a * b);
    }
}

Right-click on “Calculator” in the Project tool window and select “Create Test.” Choose the two methods you want to test (add and multiply).

The editor takes you to the newly created test class. Modify the add() test as follows:

@Test
@DisplayName("Add two numbers")
void add() {
    assertEquals(4, Calculator.add(2, 2));
}

This short test checks whether our Calculator adds two and two correctly. The @DisplayName annotation gives the test a more readable name. What if you want several assertions in a single test, all of which should run even if some fail? The assertAll() method accepts a list of assertions as lambda expressions, executes every one of them, and reports all failures together. Seeing exactly which assertions failed is far handier than a single overall result from a chain of individual assertions.
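JUnit 5’s Assertions.assertAll runs every supplied lambda and reports all failures at once. As a self-contained model of that grouped-assertion behavior (an illustration of the semantics, not JUnit’s actual implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class AssertAllSketch {
    interface Check { void run(); }

    // Run every check, collect all failures, and fail once with the full list,
    // mirroring how JUnit 5's Assertions.assertAll reports grouped assertions.
    static void assertAll(Check... checks) {
        List<String> failures = new ArrayList<>();
        for (Check check : checks) {
            try {
                check.run();
            } catch (AssertionError e) {
                failures.add(e.getMessage());
            }
        }
        if (!failures.isEmpty()) {
            throw new AssertionError(failures.size() + " assertion(s) failed: " + failures);
        }
    }

    public static void main(String[] args) {
        try {
            assertAll(
                () -> { if (2 + 2 != 4) throw new AssertionError("addition broken"); },
                () -> { if (2 * 3 != 7) throw new AssertionError("2 * 3 is not 7"); },
                () -> { if (10 / 2 != 3) throw new AssertionError("10 / 2 is not 3"); }
            );
        } catch (AssertionError e) {
            System.out.println(e.getMessage()); // both failing checks are reported together
        }
    }
}
```

The key difference from a chain of plain assertions is that the first failure does not hide the rest.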

Run tests

After setting up the code for testing, we can run the tests to find out whether the tested methods work correctly. To run an individual test, click the Run icon in the gutter next to it. To execute all tests in a test class, click the Run icon next to the test class declaration and choose Run. The results appear in the Run tool window.

RSpec’s --bisect Option

The popular Ruby testing framework RSpec provides a powerful option, --bisect, for tracking down order-dependent test failures. This flag is invaluable when a test only fails after particular other tests have run before it, a situation that is hard to debug with standard methods.

The --bisect option uses repeated halving to find the source of test flakiness. It starts by running your whole suite, then splits the non-failing examples in half and runs each half together with the failing example. It keeps subdividing the half that still reproduces the failure until it has found the smallest set of tests that reproduces it.
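The halving strategy can be sketched as a small search procedure. The following self-contained Java model is purely illustrative (RSpec’s real implementation re-runs specs in order and handles more cases): it keeps whichever half of the candidate examples still reproduces the failure.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class BisectSketch {
    // Repeatedly halve the candidate list, keeping whichever half still
    // reproduces the failure when run before the failing example.
    static <T> List<T> bisect(List<T> candidates, Predicate<List<T>> reproducesFailure) {
        List<T> current = new ArrayList<>(candidates);
        while (current.size() > 1) {
            List<T> firstHalf = current.subList(0, current.size() / 2);
            List<T> secondHalf = current.subList(current.size() / 2, current.size());
            if (reproducesFailure.test(firstHalf)) {
                current = new ArrayList<>(firstHalf);
            } else if (reproducesFailure.test(secondHalf)) {
                current = new ArrayList<>(secondHalf);
            } else {
                break; // failure needs examples from both halves; stop halving here
            }
        }
        return current;
    }

    public static void main(String[] args) {
        List<Integer> examples = List.of(1, 2, 3, 4, 5, 6, 7, 8, 9);
        // Suppose the failure only reproduces when example 3 has run first.
        Predicate<List<Integer>> repro = subset -> subset.contains(3);
        System.out.println(bisect(examples, repro)); // [3]
    }
}
```

Each round halves the candidate set, so even a large suite converges in logarithmically many runs.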

Practical Example

Consider a Ruby test suite with ten files (spec/calculator_1_spec.rb through spec/calculator_10_spec.rb), one of which occasionally fails. You suspect the failure is caused by execution order, but you can’t determine the specific combination.

Using --bisect:

  1. Open your terminal and navigate to your Ruby project directory.
  2. Run the following command to start the bisect process:
rspec --seed 1234 --bisect

--seed 1234: This guarantees that tests are executed in a consistent order during the bisect process. RSpec will print its progress over several rounds of bisection, eventually narrowing down the failing combination. Here is a sample output:

Round 1: bisecting over non-failing examples 1-9 .. ignoring examples 6-9
Round 2: bisecting over non-failing examples 1-5 .. ignoring examples 4-5
Round 3: bisecting over non-failing examples 1-3 .. ignoring example 3
Round 4: bisecting over non-failing examples 1-2 .. ignoring example 1
Bisect complete! Reduced necessary non-failing examples from 9 to 1.

Success! RSpec has identified the minimal set causing the failure. The final output provides a minimal reproduction command that includes only the necessary specs. The command below lets you re-run the failing combination in isolation, making it easier to diagnose the root cause.

rspec ./spec/calculator_10_spec.rb [1:1] ./spec/calculator_1_spec.rb [1:1] --seed 1234

You can end the bisect operation at any time with Ctrl-C. You can try it out with the projects here.

Visual Regression Testing Tools

Visual Regression Testing (VRT) tools are crucial for detecting even the slightest visual differences that conventional testing might overlook. In this section, we will review Percy and Applitools, two prominent players in the VRT space. Both platforms offer distinct benefits worth exploring.

Overview of Percy’s Visual Testing Capabilities

Percy is an automated visual testing tool designed to identify visual inconsistencies in web applications. It helps teams ensure the visual quality of their user interface by capturing and comparing screenshots of the application’s visual state during testing. Percy integrates easily with well-known testing frameworks such as Jest, Cypress, and Selenium, and it works with CI/CD systems like GitHub Actions and Jenkins. It creates baseline images of your application’s user interface and notifies you of any differences by comparing subsequent test runs against these baselines.

Integrating Percy with Jest:

  • Begin by installing the Percy package in your project.
npm install --save-dev @percy/puppeteer
  • Set up Percy in your Jest configuration.
// jest.config.js
module.exports = {
  // ... other Jest configurations
  setupFilesAfterEnv: ['<rootDir>/percy.setup.js'],
};
  • Create a percy.setup.js file with the following content:
// percy.setup.js
const { percySnapshot } = require('@percy/puppeteer');

// Ensure Percy is running
beforeAll(async () => {
  await page.goto('http://localhost:5338');
});

// Take snapshots using Percy
test('Snapshot', async () => {
  await percySnapshot(page, 'Snapshot');
});
  • Run Jest with Percy: Execute your Jest tests with Percy.
npx percy exec -- jest

The integration with Percy enhances your visual testing strategy, providing a comprehensive view of your application’s UI changes.

Integration with Selenium:

If you’re working with Java and Selenium, you can use the percy-java-selenium library in your project. Add it to your Maven project’s dependencies:

<dependency>
    <groupId>io.percy</groupId>
    <artifactId>percy-java-selenium</artifactId>
    <version>1.3.0</version>
</dependency>

A sample Java test that takes a snapshot:

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import io.percy.selenium.Percy;

public class Example {
    public static void main(String[] args) {
        // Create a new WebDriver instance using ChromeDriver
        WebDriver driver = new ChromeDriver();
        // Navigate to the specified URL (in this case, "https://example.com")
        driver.get("https://example.com");
        // Initialize Percy with the WebDriver instance
        Percy percy = new Percy(driver);
        // Take a snapshot of the current page with a descriptive name
        percy.snapshot("Java example");
        // Close the browser when done
        driver.quit();
    }
}

In short, this code sets up a Chrome WebDriver, navigates to “https://example.com”, initializes Percy, and takes a snapshot for testing and comparison. Percy keeps this snapshot as a baseline for future comparisons and regression checks. Remember to use the real URL of the site you want to test instead of “https://example.com”. You can try it out with various Java projects here.

Applitools Eyes: Visual AI Testing

Applitools Eyes is one of the most complete Visual AI testing tools, with a wide range of options for finding UI issues. It uses a commercial pricing plan: users pay based on how many tests they run and which services they need. Test scripts for Applitools Eyes can be written in Python, Java, and JavaScript, among other languages. The Applitools SDK compares snapshots taken while tests are running. Notably, Applitools Eyes cuts down on false positives by using AI and machine learning: it ensures accurate test results by automatically detecting and disregarding visual elements that change between runs.

To start using Applitools Eyes for Visual AI Testing, follow these steps:

  • Sign Up for an Account: Visit the Applitools sign-up page to create a new account.
  • Login to Applitools Dashboard: Once registered, log in to the Applitools dashboard.
  • Set Up the SDK in Your Project: To start using Applitools, install and configure the Applitools SDK. The SDK is available for popular programming languages and frameworks and can be easily integrated into existing test automation frameworks. Each Applitools account has a secret API key. Visual tests require this key to authenticate the account, upload test results to the Applitools cloud, and connect the results to that account.
  • Write Your First Visual Test: Using Applitools Eyes, create a test script that defines visual checkpoints. This entails taking screenshots of your application and confirming their accuracy.

Example (Java with Selenium):

import com.applitools.eyes.selenium.Eyes;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;

public class YourVisualTest {
    public static void main(String[] args) {
        // Initialize the WebDriver for the Chrome browser (you can modify this to use a different browser)
        WebDriver driver = new ChromeDriver();
        // Create an Eyes object to perform visual testing
        Eyes eyes = new Eyes();
        // Set your Applitools API key to enable communication with the platform (replace with your actual key)
        eyes.setApiKey("Your_Applitools_API_Key");
        try {
            // Start the visual test by providing an application name and test name of your choice
            eyes.open(driver, "Your App Name", "Your Test Name");
            // Navigate to the URL of your web application under test
            driver.get("Your App URL");
            // Capture a visual checkpoint of the entire page.
            // Consider using more specific selectors for targeted testing in the future
            eyes.checkWindow("Full Page");
            // End the test and send the results to Applitools
            eyes.close();
        } finally {
            // Always close the WebDriver to avoid resource leaks
            driver.quit();
            // If the test was aborted due to an exception, abort the Eyes test as well
            eyes.abortIfNotClosed();
        }
    }
}
  • Run Your Visual Test: When you run your test script, Applitools will take screenshots of your application and compare them to a baseline.
  • Review Results in the Dashboard: Return to the Applitools dashboard to see the visual tests’ outcomes. It will draw attention to any variations between your application’s baseline and current states. Update the baseline in the Applitools dashboard to reflect the new intended visual design as your application develops.

Adding visual testing to your workflow helps guarantee a visually consistent and enjoyable user experience, whether you choose the powerful AI-driven analysis of Applitools or the seamless integration, collaboration, and ease of use of Percy.

Conclusion

This article discussed the challenges posed by flaky tests in software development, with particular emphasis on the role of testing tools in resolving them. To explore these tools further, along with practical experiences in large-scale commercial settings using technologies such as TestNG, JUnit, and Percy, check out this in-depth study. The choice of testing tools has a substantial impact on test suite dependability, and there is an ongoing need for research and innovation to improve their ability to detect and manage flaky tests. As the software development community grows, a concerted effort to refine and advance testing tools is critical for establishing confidence, assuring reliability, and successfully handling flaky testing scenarios.

Originally published at https://semaphoreci.com on May 8, 2024.


Written by Semaphore

Supporting developers with insights and tutorials on delivering good software. · https://semaphoreci.com
