Flaky tests undermine the reliability of automated testing, leading to delays in development workflows and releases. Such inconsistencies reduce confidence in testing frameworks, absorb time that should go to troubleshooting, and divert attention from genuine problems. This is especially costly in CI/CD, where swift and dependable rollout of software is crucial.
This article delves into strategies for identifying and overcoming flakiness in Jest tests. By understanding the root causes and applying best practices, developers can improve the stability and reliability of their test suites, ensuring that automated testing remains a dependable pillar of quality assurance in their software development lifecycle.
Overview of Jest as a Testing Tool
Jest, made by Facebook, is a JavaScript Testing Framework that’s all about simplicity. It’s great for React projects but also works well with other JavaScript setups. Its zero-configuration setup, instant feedback, and built-in coverage tool make Jest popular among developers.
Flaky tests are tests that produce inconsistent outcomes either passing or failing, without changes in the code. They are a significant pain point in software development, leading to mistrust in testing suites, wasted developer time, and potentially, the overlooking of genuine bugs. In Continuous Integration/Continuous Deployment (CI/CD) pipelines, flaky tests can cause unnecessary delays in deployments, affecting the overall software quality and delivery speed.
Flaky tests in Jest, as in other environments, often stem from non-deterministic behaviors within the tests or the code being tested. Examples include reliance on external services, timing issues, and improper handling of asynchronous operations.
What causes flaky tests when testing with Jest?
Several things can cause flakiness in your tests. Let’s look at some of them with some examples.
Mocking the wrong selector
An example of this would be mocking the useSelector hook globally in a React-Redux component test. This can affect the way the component is rendered and cause flakiness.
The wrong way:
import { render, screen } from "@testing-library/react";
import { Provider } from "react-redux";
import MyComponent from "../MyComponent";
import store from "../../../path-to-AppStore.ts";

// Globally mocking the useSelector hook
jest.mock("react-redux", () => ({
  ...jest.requireActual("react-redux"),
  useSelector: jest.fn().mockReturnValue("Mock-Greeting"),
}));

test("displays the mock greeting label", () => {
  render(
    <Provider store={store}>
      <MyComponent />
    </Provider>
  );
  const greetingLabel = screen.getByText("Mock-Greeting");
  expect(greetingLabel).toBeInTheDocument();
});
The first example demonstrates a common mistake when trying to mock Redux's useSelector hook: it replaces the hook globally for all tests in the file with a mock function that always returns a fixed value ("Mock-Greeting"). Because the mock is hard-coded to one value, the tests lose the flexibility to handle different scenarios or state shapes.
The correct way:
// Correct approach using a mock store
import { render, screen } from "@testing-library/react";
import { Provider } from "react-redux";
import configureMockStore from "redux-mock-store";
import MyComponent from "../MyComponent";

const mockStore = configureMockStore();
const store = mockStore({
  greeting: "Mock-Greeting", // Adjust this initial state to match your actual state structure
});

test("displays the mock greeting label", () => {
  render(
    <Provider store={store}>
      <MyComponent />
    </Provider>
  );
  const greetingLabel = screen.getByText("Mock-Greeting");
  expect(greetingLabel).toBeInTheDocument();
});
Here are the key differences:
- Isolation of Mocks: In the correct approach, the mock is scoped to the test by creating a mock store with the desired state. This prevents the mock from affecting other tests and allows for more precise control over the test environment.
- Flexibility: The mock store can be customized for each test, allowing you to simulate different states and scenarios easily. This is more flexible than a global mock that returns the same value for all tests.
- Realistic Testing Environment: By using a mock store that mimics the Redux store’s behavior, the test more accurately reflects how the component interacts with Redux in a real application. This leads to more reliable and meaningful test results.
Hard-coding wait timers
We can simulate a scenario where the “Loading…” text for a component disappears after a fixed but arbitrary delay that may or may not be adequately waited for in the test. This approach introduces non-determinism, as the test’s success could depend on timing specifics that aren’t consistently handled.
The wrong way:
import { render, screen, fireEvent } from "@testing-library/react";
import Button from "../path-to-Button-component";

test("Button text changes incorrectly", async () => {
  jest.useFakeTimers();
  render(<Button initialText="Click Me" />);
  fireEvent.click(screen.getByRole("button"));
  jest.advanceTimersByTime(500); // Assumes the operation takes 500ms
  // This might fail if the actual loading time is longer than 500ms
  expect(screen.getByRole("button")).toHaveTextContent("Clicked");
  jest.useRealTimers();
});
The wrong approach uses Jest’s fake timers to artificially advance time, assuming the duration of the asynchronous operation. This method can lead to flaky tests if the assumed duration doesn’t match the actual time required for the operation.
Correct way:
// Correct approach
import { render, screen, fireEvent, waitFor } from "@testing-library/react";
import Button from "../path-to-Button-component";

test("Button text changes correctly", async () => {
  render(<Button initialText="Click Me" />);
  fireEvent.click(screen.getByRole("button"));
  // Dynamically wait for the button text to change to "Clicked"
  await waitFor(() =>
    expect(screen.getByRole("button")).toHaveTextContent("Clicked")
  );
});
The correct approach uses waitFor from the React Testing Library to dynamically wait for the UI to reach the expected state. Because it makes no assumptions about how long the asynchronous operation will take, the test is more reliable.
To ensure the test is not flaky and handles asynchronous behavior correctly, we should avoid using fixed timers and instead rely on Jest’s and Testing Library’s built-in mechanisms to wait for the expected conditions.
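The idea behind waitFor can be sketched in plain Node, independent of any UI: poll a condition until it holds or a timeout expires, instead of guessing a fixed delay. Note that pollUntil below is an illustrative helper written for this sketch, not a Testing Library API.

```javascript
// Sketch of the polling idea behind waitFor (pollUntil is a
// hypothetical helper, not part of Testing Library).
async function pollUntil(condition, { timeoutMs = 1000, intervalMs = 10 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (!condition()) {
    if (Date.now() > deadline) throw new Error("condition never became true");
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}

let label = "Loading...";
setTimeout(() => (label = "Clicked"), 50); // simulated async UI update

pollUntil(() => label === "Clicked").then(() => {
  console.log(label); // "Clicked"
});
```

Because the helper reacts to the state change rather than a guessed delay, it passes whether the update takes 5ms or 500ms, which is exactly why waitFor-style waiting removes this class of flakiness.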
Wrong use of async in your Jest tests
Marking a test async is not enough: if the test never awaits its asynchronous work, it can finish before that work completes.
This is wrong:
// Wrong approach:
test("incorrectly attempts to fetch user data without waiting", async () => {
fetchUserData("userId123").then((userData) => {
// This assertion might not be reached before the test completes.
expect(userData).toBeDefined();
});
// The test might end before the fetchUserData promise resolves.
});
The wrong approach demonstrates an asynchronous test where the async keyword is used, but the test does not wait for the asynchronous operation (fetchUserData) to complete before making assertions. This can lead to flaky tests or false positives, where tests pass without actually validating the expected conditions.
Issues with This Approach:
- Premature Test Completion: The test may complete before the promise returned by fetchUserData resolves, meaning the assertions inside the .then() callback may never execute.
- Unreliable Test Outcomes: Since the test does not wait for the asynchronous operation to finish, it can lead to unreliable outcomes, potentially missing failures.
Correct way:
// Correct approach:
test("correctly fetches user data", async () => {
const userData = await fetchUserData("userId123");
expect(userData).toBeDefined();
expect(userData.name).toEqual("John Doe");
});
The correct approach uses async and await together, ensuring that the test waits for the fetchUserData operation to complete before proceeding with the assertions. This pattern guarantees that the asynchronous operation's result is available for evaluation, making the test outcomes reliable and accurate.
Advantages of This Approach:
- Guaranteed Execution Order: With await, the test pauses until the asynchronous operation finishes, ensuring that assertions are only executed afterward.
- Reliable and Accurate Testing: Waiting for completion offers a dependable way to test asynchronous operations, ensuring meaningful and correct assertions.
- Simplicity and Clarity: Using async/await makes tests easier to read and understand, clearly indicating asynchronous operations that must be awaited.
Problems with Non-deterministic Inputs/Outputs
Flakiness occurs when tests rely on inputs or outputs that can unpredictably vary between runs. This variability can arise from external data sources, random data generation, or any inconsistent form of input across tests.
Consider a simple function getRandomNumber that returns a random number between 1 and 10. Directly testing this function poses challenges due to its non-deterministic output.
Function implementation:
function getRandomNumber() {
return Math.floor(Math.random() * 10) + 1;
}
Bad Code Example: Using random data within tests without controlling the randomness.
test("returns a number between 1 and 10", () => {
const number = getRandomNumber();
expect(number).toBeGreaterThanOrEqual(1);
expect(number).toBeLessThanOrEqual(10);
});
The test calls getRandomNumber, which uses Math.random() directly, so each run gets a different input and the outcome is unpredictable. While the test checks that the number is within the expected range, it does not control the randomness, making it impossible to reliably test for specific values.
Good Code Example: Mocking or seeding random values to ensure consistency across test runs.
test("getRandomNumber returns a predictable random number", () => {
  // Mock Math.random to always return 0.5
  const randomSpy = jest.spyOn(Math, "random").mockReturnValue(0.5);
  const number = getRandomNumber();
  // Since Math.random is mocked to return 0.5, the output of getRandomNumber is predictable
  expect(number).toBe(6); // Math.floor(0.5 * 10) + 1 = 6
  // Restore the original Math.random so later tests are unaffected
  randomSpy.mockRestore();
});
In the correct approach, Math.random is replaced with a mocked version that consistently returns a fixed value (0.5). This makes the output of getRandomNumber predictable and testable, so you can write precise assertions about the function's behavior and guarantee deterministic, reliable tests. It is crucial, however, to restore the original Math.random after the test to prevent the mock from impacting subsequent tests.
Problems with Time-Based Logic
Tests that depend on time-based logic can become flaky if they rely on real-time delays or the system clock. These tests may pass or fail depending on the execution speed of the test environment.
Bad Code Example: Using actual delays in tests.
test("flaky test with real delay", (done) => {
setTimeout(() => {
expect(true).toBeTruthy();
done();
}, 1000); // Waits 1 second
});
The test waits for a real-time delay using setTimeout, making the test slow and its success dependent on the environment's timing.
Good Code Example: Using Jest’s fake timers to simulate time passing.
test("reliable test with fake timers", () => {
  jest.useFakeTimers();
  setTimeout(() => {
    expect(true).toBeTruthy();
  }, 1000);
  jest.runAllTimers(); // Synchronously runs all pending timers
  jest.useRealTimers(); // Restore real timers for subsequent tests
});
The test uses Jest’s fake timers to simulate the delay, ensuring the test runs quickly and its outcome is independent of the environment’s timing.
Problems with Race Conditions
Race conditions occur when the outcome of a test depends on the sequence or timing of uncontrollable events, such as API calls or database operations. Tests with race conditions can unpredictably pass or fail.
Bad Code Example: Testing asynchronous operations without properly handling the execution order.
test("flaky test due to race condition", () => {
let value = 0;
asyncOperation().then(() => (value = 1));
expect(value).toBe(1); // May fail if asyncOperation hasn't completed
});
The test makes an assertion immediately after initiating an asynchronous operation, without waiting for it to complete. This can lead to the assertion being evaluated before the operation finishes, causing unpredictable test outcomes.
Good Code Example: Ensuring asynchronous operations complete before assertions.
test("reliable test avoiding race conditions", async () => {
  let value = 0;
  await asyncOperation().then(() => (value = 1));
  expect(value).toBe(1);
});
The test awaits the completion of the asynchronous operation before making assertions, ensuring that the test outcome is deterministic and reflects the operation’s actual result.
Strategies for Identifying Flaky Tests
Tools like Jest's --runInBand flag can help identify flaky tests by running tests sequentially, reducing the chances of tests interfering with each other. Additionally, custom scripts or third-party tools can automate the process of running tests multiple times to spot inconsistencies.
Isolating flaky tests involves running them under varied conditions to understand their behavior better. Techniques include altering the execution order of tests, mocking external dependencies, and using Jest’s timers to control timing-based logic.
Best Practices for Writing Reliable Tests
- Deterministic Inputs and Outputs: Ensure tests have predictable inputs and outputs. Use mocking and stubbing to isolate the test environment from external dependencies.
- Proper Asynchronous Handling: Use Jest’s async testing features correctly to handle promises, async/await, and callbacks.
- Avoid Timing-Based Logic: Where possible, avoid logic that depends on timing or delays. Use Jest’s fake timers to simulate timers.
- Managing External Dependencies and State: Mock external services and APIs to ensure tests do not rely on external factors, and reset the state before each test to prevent tests from affecting each other.
- Addressing Timing Issues and Race Conditions: Utilize Jest's fake timers to control JavaScript timers in your tests, eliminating flakiness caused by timing issues, and ensure proper synchronization of asynchronous operations to avoid race conditions.
Note: Regular review and maintenance of the test suite are crucial. Periodically audit tests for flakiness, optimize test code, and update tests alongside the codebase changes.
Conclusion
Dealing with flaky tests in Jest calls for best practices, thoughtful test design, and the right tools and methods for addressing non-deterministic behavior. Embracing these approaches allows developers to maintain the reliability of their Jest tests, thereby enhancing the quality and resilience of their software projects.
Originally published at https://semaphoreci.com on April 2, 2024.