The Test Stability Report (TSR) Process Intelligence flow monitors test case results over a specified number of builds and flags tests whose status changes frequently as unstable. The flow includes widgets that show the current state of your tests, as well as an interactive drill-down report that shows the test execution history and test status changes.
This workflow collects all test statuses for the last 10 builds and calculates a stability score for each test. The scoring algorithm is based on the number of test status changes and how recently the changes occurred relative to the latest build. If the score exceeds the configured threshold, the test case is considered unstable. You can customize the algorithm, the threshold, and the number of builds used to determine the stability score.
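The product's built-in algorithm is not published here, but a recency-weighted score of the kind described above can be sketched as follows. The function names, status strings, and weighting scheme are illustrative assumptions, not the actual Process Intelligence implementation:

```javascript
// Illustrative sketch only -- not the shipped TSR algorithm.
// statuses: one entry per build, oldest build first, e.g. ["pass", "fail", "pass"].
function stabilityScore(statuses) {
  const n = statuses.length;
  let score = 0;
  for (let i = 1; i < n; i++) {
    if (statuses[i] !== statuses[i - 1]) {
      // Weight each change by how close it is to the latest build,
      // so a recent flip counts more than one many builds ago.
      score += i / (n - 1);
    }
  }
  return score;
}

// A test is unstable when its score exceeds the threshold (0 by default).
function isUnstable(statuses, threshold = 0) {
  return stabilityScore(statuses) > threshold;
}
```

With this sketch, a test that never changes status scores 0 and stays stable, while a single recent flip pushes the score above the default threshold of 0.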
The Test Stability DTP Workflow includes the following components:
Comparing builds from different branches is not currently supported. Tests from the development branch, for example, are treated as different tests from those in the master branch.
The flow is installed as part of the Process Intelligence Pack. See the Process Intelligence Pack installation instructions for details. After installing the flow, deploy the report to your DTP environment.
Double-click the Set Threshold and Max Build Count, Algorithm change node to access the configuration settings.
You can change the following settings:
flow.threshold
: The flow works on a scoring system. A change in a test's result from one build to the next affects its score. By default, if the score exceeds 0, the test is considered unstable. If test status fluctuation is expected, you can change the threshold to a higher value so that tests are considered stable despite a moderate change in status over the set number of builds.

flow.maxBuildCount
: By default, TSR uses the 10 most recent builds as the sample size for determining test stability. You can change the maxBuildCount value to include more or fewer builds. Using more builds will affect system performance.

flow.algorithm
: The built-in algorithm is used to calculate test stability by default, but you can specify custom in this field and program your own algorithm in the Calculate Stability (custom) function node. To use the custom algorithm, you will also need to double-click the Switch Algorithm function node and specify the custom algorithm.
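If you set flow.algorithm to custom, your logic lives in the Calculate Stability (custom) function node. The following is a minimal sketch of what such a function node body might do; the payload field names (msg.payload.statuses, msg.payload.score) are assumptions for illustration, so inspect the actual messages flowing through your deployment before relying on them:

```javascript
// Hypothetical Node-RED function node body. The payload shape is an
// assumption, not the flow's documented message format.
function calculateStabilityCustom(msg) {
  const statuses = msg.payload.statuses || [];
  // Example custom rule: count raw status changes, ignoring recency.
  let changes = 0;
  for (let i = 1; i < statuses.length; i++) {
    if (statuses[i] !== statuses[i - 1]) changes++;
  }
  msg.payload.score = changes;
  return msg; // A Node-RED function node passes the message downstream.
}
```

In an actual function node you would omit the wrapping function declaration and work directly with the injected msg object, returning it at the end of the node body.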
After deploying the artifact, the Test Stability - Donut and Test Stability - Statistics widgets will be available in DTP under the Process Intelligence category. See Adding Widgets for instructions on adding widgets to the dashboard.
You can configure the following settings for both widgets.
Setting | Description
---|---
Title | Enter a new title to replace the default title that appears on the dashboard.
Filter | Choose Dashboard Settings to use the dashboard filter or choose a filter from the drop-down menu (see Creating and Managing Filters for more information about filters in DTP).
Last Run | The widget shows data for the last run based on the test status you choose. You can add multiple instances of the widget configured with different last run settings to create a more complete view of your test stability.
The widget shows the percentage of stable tests for the last run. The colored segment around the widget represents the stable tests and the grayed-out segment represents the unstable tests.
You can perform the following actions:
This widget provides a summary of the test stability statistics in a chart. The chart shows the percentage of stable tests for the last run, number of tests, number of stable tests, and number of unstable tests.
Click on a row to open the Test Execution Report filtered according to the widget configuration and the clicked area (see Viewing the Test Execution History Report for additional information about the report).
This report shows the test results in the filter for the last 10 builds (default). You can filter the report according to test stability (All, Stable, or Unstable) and last run status (All, Passed, Failed, Incomplete, No Data).
This column shows the name of the test file containing the executed test case. Manual tests do not have associated test file names. You can click in a cell to highlight a test file name and use your keyboard to copy it to your operating system clipboard (Ctrl + C or Command + C). This makes searching for the file in your test automation tool easier.
This column shows the name of the test case that was executed. You can click in a cell to highlight a test case and use your keyboard to copy it to your operating system clipboard (Ctrl + C or Command + C). This makes searching for the test case in your test automation tool easier.
This column shows a grid of builds and test cases. Each row represents a test case and each cell represents the results for one build. The results are color-coded:
By default, the report loads Unstable results followed by Stable results. The second tier order within each stability category is Failed, Incomplete, Passed, and No Data.
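The default two-tier ordering can be expressed as a comparator. The stability and status names below come from the report's own filters; the comparator itself is an illustrative sketch, not the product's code:

```javascript
// First tier: Unstable rows before Stable rows.
const stabilityRank = { Unstable: 0, Stable: 1 };
// Second tier: last-run status within each stability group.
const statusRank = { Failed: 0, Incomplete: 1, Passed: 2, "No Data": 3 };

// Hypothetical row shape: { stability: "Unstable", lastRun: "Failed" }.
function compareRows(a, b) {
  const byStability = stabilityRank[a.stability] - stabilityRank[b.stability];
  if (byStability !== 0) return byStability;
  return statusRank[a.lastRun] - statusRank[b.lastRun];
}
```

Sorting an array of such rows with compareRows reproduces the load order described above.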
You can click on a cell to view the test in the Test Explorer. Test cases that do not have details in DTP do not drill down into the explorer because there are no details to display.
You can mouse over a cell to view the build information and test status.
This column shows the test's stability status.
This column enables you to define an action to take for the test case. Choose an action from the drop-down menu and the information about the test will be updated in the Test Explorer.