The Test Stability Report (TSR) Process Intelligence flow monitors test case results over a specified number of builds and flags tests whose status changes frequently as unstable. The flow includes widgets that show the current state of your tests, as well as an interactive drill-down report that shows the test execution history and test status changes.

Introduction

This workflow collects all test statuses for the last 10 builds and calculates a stability score for each test. The score is based on the number of test status changes and how recently those changes occurred relative to the latest build. If the score exceeds the defined threshold, the test case is considered unstable. You can customize the algorithm, threshold, and number of builds used to determine the stability score.
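
The exact built-in formula is internal to the flow, but a minimal sketch of the approach described above, written in the JavaScript used by Extension Designer function nodes, could look like the following (the recency weighting here is an illustrative assumption, not the shipped algorithm):

  // Recency-weighted stability score: each status change counts more the
  // closer it is to the latest build. `statuses` holds one status per build,
  // oldest first.
  function stabilityScore(statuses) {
    let score = 0;
    for (let i = 1; i < statuses.length; i++) {
      if (statuses[i] !== statuses[i - 1]) {
        score += (i + 1) / statuses.length;
      }
    }
    return score;
  }

  // With the default threshold of 0, any status change marks the test unstable.
  const threshold = 0;
  const unstable = stabilityScore(["pass", "fail", "pass", "pass"]) > threshold; // true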

The Test Stability DTP Workflow includes the following components:

  • Test Stability - Statistics widget: This widget provides a summary of test statuses for the last run and selected filter. 
  • Test Stability - Donut widget: This widget displays the percentage of tests that fall into each stability category (stable or unstable) based on the most recent build.
  • Test Execution History report: This interactive drill-down report shows the test execution history and test status changes for the specified number of builds. 

Comparing builds from different branches is not currently supported. Tests from the development branch, for example, are considered different tests than those from the master branch.

Requirements

  • Parasoft DTP and DTP Enterprise Pack 2020.1
  • Test execution data from one of the following Parasoft tools:
    • SOAtest version 9.10.7 or later
    • C/C++test Standard, dotTEST, or Jtest 10.4.3 or later
    • C/C++test Professional 10.4.3 or later 

Installation

The flow is installed as part of the Process Intelligence Pack. See the Process Intelligence Pack installation instructions for details. After installing the pack, deploy the flow to your DTP environment.

  1. If you have not already done so, install the Process Intelligence Pack.
  2. Open Extension Designer and click the Services tab.
  3. Expand the Process Intelligence Engine service category. You can deploy assets under any service category you wish, but we recommend using the Process Intelligence Engine category to match how Parasoft categorizes the assets. You can also click Add Category to create your own service category (see Working with Services for additional information).
  4. You can deploy the artifact to an existing service or add a new service. The number of artifacts deployed to a service affects the overall performance. See Extension Designer Best Practices for additional information. Choose an existing service and continue to step 6 or click Add Service.
  5. Specify a name for the service and click Confirm.
  6. The tabbed interface helps you keep artifacts organized within the service. Organizing your artifacts across one or more tabs does not affect the performance of the system. Click on a tab (or click the + button to add a new tab) and choose Import from the vertical ellipses menu.
  7. Choose Library > Workflows > Process Intelligence > Test Stability Report and click Import.
  8. Click anywhere in the open area to drop the artifact into the service.
  9. Click Deploy to finish deploying the artifact to your DTP environment.
  10. Return to DTP and refresh your dashboard. You will now be able to add the related widgets.

Customizing the TSR Flow

Double-click the "Set Threshold and Max Build Count, Algorithm" change node to access the configuration settings.

You can change the following settings:

  • flow.threshold: The flow works on a scoring system. A change in test result from one build to the next affects the score. By default, if the score exceeds 0, the test is considered unstable. If test status fluctuation is expected, you can change the threshold to a higher value so that tests are considered stable despite a moderate change in status over the set number of builds. 
  • flow.maxBuildCount: By default, TSR uses the 10 most recent builds as the sample size for determining test stability. You can change the maxBuildCount value to include more or fewer builds. Using more builds increases the amount of data the flow must process, which can degrade performance.
  • flow.algorithm: The built-in algorithm is used to calculate test stability by default, but you can specify custom in this field and program your own algorithm in the Calculate Stability (custom) function node (see the sketch following this list).
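
The change node stores these values in flow context. For reference, a function node performing the equivalent assignments would look roughly like this; the values shown are illustrative, not the shipped defaults:

  // Function-node equivalent of the change node's assignments.
  // Values are examples only; the defaults are threshold 0, maxBuildCount 10,
  // and the built-in algorithm.
  flow.set("threshold", 2);        // tolerate a moderate number of status changes
  flow.set("maxBuildCount", 15);   // widen the sample beyond the default 10 builds
  flow.set("algorithm", "custom"); // route scoring to Calculate Stability (custom)
  return msg;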


To use the custom algorithm, you will also need to double-click the Switch Algorithm function node and specify the custom algorithm.
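
For example, a custom algorithm could count only the status changes in the five most recent builds. A rough sketch of what the Calculate Stability (custom) function node body might look like follows; the message fields are assumptions, so verify the structure the deployed flow actually passes before adapting it:

  // Hypothetical body for the Calculate Stability (custom) function node.
  // Assumes msg.payload.statuses holds one status per build, oldest first.
  const statuses = msg.payload.statuses || [];
  const recent = statuses.slice(-5); // weigh only the five most recent builds
  let score = 0;
  for (let i = 1; i < recent.length; i++) {
    if (recent[i] !== recent[i - 1]) score += 1;
  }
  msg.payload.score = score;
  msg.payload.unstable = score > flow.get("threshold");
  return msg;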

Adding and Configuring the Widgets

After deploying the artifact, the Test Stability - Donut and Test Stability - Statistics widgets will be available in DTP under the Process Intelligence category. See Adding Widgets for instructions on adding widgets to the dashboard.

You can configure the following settings for both widgets.

  • Title: Enter a new title to replace the default title that appears on the dashboard.
  • Filter: Choose Dashboard Settings to use the dashboard filter or choose a filter from the drop-down menu (see Creating and Managing Filters for more information about filters in DTP).
  • Last Run: The widget shows data for the last run based on test status. You can choose:
    • All to include all tests in the last run.
    • Passed to include only the tests that passed in the last run.
    • Failed to include only the tests that failed in the last run.
    • Incomplete to include only the tests that did not complete in the last run.
    • No Data to include only the tests that did not report a status.

You can add multiple instances of the widget configured with different last run settings to create a more complete view of your test stability.

Viewing the Test Stability - Donut Widget

The widget shows the percentage of stable tests for the last run. The colored segment around the widget represents the stable tests and the grayed-out segment represents the unstable tests. 


Click a segment of the donut to open the Test Execution History report filtered according to the widget configuration and the segment you clicked (see Viewing the Test Execution History Report for additional information about the report).

Viewing the Test Stability - Statistics Widget

This widget provides a summary of the test stability statistics in a chart. The chart shows the percentage of stable tests for the last run, number of tests, number of stable tests, and number of unstable tests.

Click on a row to open the Test Execution History report filtered according to the widget configuration and the row you clicked (see Viewing the Test Execution History Report for additional information about the report).

Viewing the Test Execution History Report

By default, this report shows the test results in the filter for the last 10 builds. You can filter the report according to test stability (All, Stable, or Unstable) and last run status (All, Passed, Failed, Incomplete, or No Data).

File Name Column

This column shows the name of the test file containing the executed test case. Manual tests do not have associated test file names. You can click in a cell to highlight a test file name and use your keyboard to copy it to your operating system clipboard (Ctrl + C or Command + C). This makes searching for the file in your test automation tool easier.

Test Case Column

This column shows the name of the test case that was executed. You can click in a cell to highlight a test case and use your keyboard to copy it to your operating system clipboard (Ctrl + C or Command + C). This makes searching for the test case in your test automation tool easier.

Test Status Column

This column shows a grid of builds and test cases. Each row represents a test case and each cell represents the results for one build. The results are color-coded:

  • Green: passed tests
  • Red: failed tests
  • Yellow: incomplete tests
  • Gray: no results for the test were reported in the build

About the default order for the report data

By default, the report loads Unstable results followed by Stable results. The second tier order within each stability category is Failed, Incomplete, Passed, and No Data.
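
In JavaScript terms, that ordering behaves like the following comparator; the field names and rank values are assumptions used only to illustrate the documented order:

  // Two-tier default sort: stability category first, then last-run status.
  const stabilityRank = { Unstable: 0, Stable: 1 };
  const statusRank = { Failed: 0, Incomplete: 1, Passed: 2, "No Data": 3 };

  // `rows` stands in for the report's entries.
  const rows = [
    { test: "testA", stability: "Stable",   lastRunStatus: "Passed" },
    { test: "testB", stability: "Unstable", lastRunStatus: "Failed" },
    { test: "testC", stability: "Unstable", lastRunStatus: "Passed" },
  ];
  rows.sort((a, b) =>
    stabilityRank[a.stability] - stabilityRank[b.stability] ||
    statusRank[a.lastRunStatus] - statusRank[b.lastRunStatus]
  );
  // Resulting order: testB, testC, testA.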

You can click on a cell to view the test in the Test Explorer. Test cases that do not have details in DTP will not drill down into the explorer because there would be nothing to show.

You can mouse over a cell to view the build information and test status. 

Stability Column

This column shows whether the test is considered stable or unstable.

Action Column

This column enables you to define an action to take for the test case. Choose an action from the drop-down menu and the information about the test will be updated in the Test Explorer.
