The Test Stability Report (TSR) Process Intelligence flow monitors test case results over a specified number of builds and flags tests that frequently change status as unstable. The flow includes a widget that shows the current state of your tests, as well as an interactive drill-down report that shows the test execution history and test status changes.

Introduction

This workflow collects all test statuses for the last 20 builds and calculates a stability score for each test. The scoring algorithm is based on the number of test status changes and how recently the changes occurred relative to the latest build. If the score exceeds the configured threshold, the test case is considered unstable. You can customize the algorithm, threshold, and number of builds used to determine the stability score.
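The recency-weighted scoring described above can be sketched as follows. This is a hypothetical illustration of the idea, not the artifact's actual code; the function names, status values, and weighting scheme are assumptions.

```javascript
// Hypothetical sketch of a recency-weighted instability score.
// `statuses` is ordered oldest to newest; names and weights are
// illustrative assumptions, not the artifact's actual algorithm.
function stabilityScore(statuses) {
  let score = 0;
  for (let i = 1; i < statuses.length; i++) {
    if (statuses[i] !== statuses[i - 1]) {
      // A status change closer to the latest build weighs more heavily.
      score += i / (statuses.length - 1);
    }
  }
  return score;
}

// A test is unstable when its score exceeds the threshold (default 0).
function isUnstable(statuses, threshold = 0) {
  return stabilityScore(statuses) > threshold;
}
```

Under this sketch, a test that never changes status scores 0 and is stable, while even a single recent flip pushes the score above the default threshold of 0.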

The Test Stability DTP Workflow includes the following components:

  • Test Stability - Donut widget: This widget displays the percentage of tests that fall into various stabilization categories based on the most recent build.
  • Test Execution History report: This interactive drill-down report shows the test execution history and test status changes for the specified number of builds. 

Comparing builds from different branches is not currently supported. Tests from the development branch, for example, are treated as different tests than tests from the master branch.

Requirements

  • DTP Enterprise Pack 5.3.3
  • Parasoft DTP 5.3.3

Installation

You can download and install the Test Stability Report from the Parasoft Marketplace. See Downloading and Installing Artifacts.

Upgrading from a Previous Version

The Test Stability Report is the next evolution of Test Stability Index 2.0, but it is a standalone artifact and should be installed separately. There are several major differences between the artifacts, including (but not limited to):

  • TSR includes a Test Execution History report that provides targeted information about test stability.
  • TSR does not require deploying a custom processor to DTP.
  • You no longer need to manually invoke the flow.
  • The widget configuration is much simpler and uses more of DTP's native dashboard elements.

If you were using Test Stability Index 2.0 or older, we recommend uninstalling those artifacts before installing the Test Stability Report. 

Importing and Deploying TSR

  1. Choose Import > Library > Process Intelligence > Test Stability Report > Test Stability Report from the actions menu in Extension Designer to import the artifact into a service.
  2. (Optional) By default, TSR calculates test stability based on the last 20 builds, but you can customize the algorithm (see Customizing the TSR Flow).
  3. Click Deploy.

Customizing the TSR Flow

Double-click the Set Threshold and Max Build Count, Algorithm change node to access the configuration settings.

You can change the following settings:

  • flow.threshold: The flow works on a scoring system. A change in test result from one build to the next affects the score. By default, if the score exceeds 0, the test is considered unstable. If test status fluctuation is expected, you can raise the threshold so that tests are considered stable despite a moderate number of status changes over the set number of builds.
  • flow.maxBuildCount: By default, TSR uses the 20 most recent builds as the sample size for determining test stability. You can change the maxBuildCount value to include more or fewer builds. Using more builds may degrade performance.
  • flow.algorithm: The built-in algorithm is used to calculate test stability by default, but you can specify custom in this field and program your own algorithm in the Calculate Stability (custom) function node.

To use the custom algorithm, you will also need to double-click the Switch Algorithm function node and specify the custom algorithm.
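If you program your own algorithm, the body of the Calculate Stability (custom) function node might look like the sketch below. The core logic is shown as a plain function so it can be read on its own; the payload shape (an array of tests, each with an ordered list of statuses) and the field names are assumptions for illustration, so inspect the actual message in your own flow before adapting it. In Extension Designer (Node-RED), this logic would run inside the function node, which receives a msg object and returns it, and would read the threshold with flow.get("threshold").

```javascript
// Hedged sketch of a custom stability rule. The test/payload shape is
// an assumption for illustration; your flow's actual message may differ.
function calculateStability(tests, threshold) {
  return tests.map(test => {
    // Example rule: count raw status changes across builds,
    // ignoring recency entirely (unlike the built-in algorithm).
    let changes = 0;
    for (let i = 1; i < test.statuses.length; i++) {
      if (test.statuses[i] !== test.statuses[i - 1]) changes++;
    }
    return { ...test, score: changes, unstable: changes > threshold };
  });
}
```

A rule like this would mark a test unstable only after its number of status flips exceeds the threshold, regardless of when the flips occurred.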

Using the Test Stability - Donut Widget

This widget displays the percentage of tests that fall into various stabilization categories based on the most recent build.

Adding and Configuring the Widget

After deploying the artifact, the Test Stability - Donut widget will be available in DTP under the Process Intelligence category. See Adding Widgets for instructions on adding widgets to the dashboard.

Choose a filter from the Filter drop-down menu to configure the widget (see Creating and Managing Filters for more information about filters in DTP).

Viewing the Widget

The widget has color-coded segments that represent groups of tests organized into the following categories.

Unstable

Unstable tests are represented by the dark red segment. Click on this segment to open a filtered view of the Test Execution History Report that shows all unstable tests in the filter. See Viewing the Test Execution History Report.

Stable Passed

Tests that passed in the most recent run and have been identified as stable according to the algorithm are represented by the green segment. Click on this segment to open a filtered view of the Test Execution History Report that shows all passing tests that are considered stable. See Viewing the Test Execution History Report.

Stable Failed

Tests that failed in the most recent run and have been identified as stable according to the algorithm are represented by the light red segment. Click on this segment to open a filtered view of the Test Execution History Report that shows all failing tests that are considered stable. See Viewing the Test Execution History Report.

Stable Incomplete

Tests that did not complete in the most recent run and have been identified as stable according to the algorithm are represented by the yellow segment. Click on this segment to open a filtered view of the Test Execution History Report that shows all incomplete tests that are considered stable. See Viewing the Test Execution History Report.

Stable No Data

Tests that are considered stable but did not report a status during the most recent test run are represented by the gray segment. Click on this segment to open a filtered view of the Test Execution History Report that shows all tests with no data in the last run that are considered stable. See Viewing the Test Execution History Report.

The center of the widget shows the percentage of stable tests relative to all tests that ran in the most recent build. Click the center of the widget to open an unfiltered instance of the Test Execution History Report.

Stability categories and the last test run

The Stable category in the widget is based on the status of the last test run. The Unstable category shows all unstable tests regardless of last run status.

Viewing the Test Execution History Report

This report shows all test results in the filter for the last 20 builds (default).

The Test Status column shows a grid of builds and test cases. Each row represents a test case and each cell represents the results for one build. The results are color-coded:

  • Green cells represent passed tests
  • Red cells represent failed tests
  • Yellow cells represent incomplete tests
  • Gray cells represent builds where no results for the test were reported

About the default order for the report data

By default, the report loads Unstable results followed by Stable results. The second tier order within each stability category is Failed, Incomplete, Passed, and No Data.
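The two-tier ordering described above can be expressed as a comparator. This is an illustrative reconstruction of the documented sort order, not the report's actual code; the field names are assumptions.

```javascript
// Illustrative comparator reproducing the report's default ordering:
// Unstable rows before Stable rows, then within each stability tier
// Failed, Incomplete, Passed, No Data. Field names are assumptions.
const statusRank = { failed: 0, incomplete: 1, passed: 2, nodata: 3 };

function defaultOrder(a, b) {
  if (a.unstable !== b.unstable) return a.unstable ? -1 : 1;
  return statusRank[a.lastStatus] - statusRank[b.lastStatus];
}
```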

You can perform the following actions:

  • Choose a stability level from the Stability drop-down menu to filter by stable, unstable, or all test cases. 
  • Choose a status from the Last Run drop-down menu. You can filter by tests that passed, failed, were incomplete, or had no data in the last run.
  • Mouse over a cell to view the build information and test status.
  • Click on a cell to view the test in the Test Explorer.
  • Define an action to take for the test case from the drop-down menu in the Action column. Specifying an action updates the information about the test in the Test Explorer.



