This topic covers how to analyze and address C/C++test’s test execution results.
After test execution, C/C++test generates a prioritized task list organized by error categories and severities. For tests run in the GUI, tasks are organized into the following categories in the Quality Tasks view:
For tests run from the command line interface, tasks are reported in the Test Generation and Test Execution sections of the report. Results imported into your IDE as described in Importing Results into the UI will also appear in the Quality Tasks view.
You can view the test configuration that triggered a task by right-clicking the task in the Quality Tasks view and choosing View Test Configuration.
Quickly accessing the test configuration from the Quality Tasks view is useful for group architects who are customizing tests and want to quickly disable settings that aren't applicable. Developers importing results from a server-based run may also need to open and review the test configurations that triggered tasks.
For each unit testing problem reported, C/C++test reports the stack trace for the test case that caused the problem.
To review one of the lines of code referenced in the stack trace, double-click the node that shows the line number, or right-click the node and choose Go to from the shortcut menu. The editor will then open and highlight the designated line of code.
The results of postcondition macros (all the asserted values reported) are displayed in the Quality Tasks view. These postconditions capture the state of test objects or global variables used in the test.
Any test case with reported postconditions can automatically be validated for use in regression testing. Verification changes the *_POST_CONDITION_* macros into assertions that will fail if a subsequent test run does not produce the expected (validated) value. This is especially useful for automatically generating a regression base for legacy code. For details on verification, see Verifying Test Cases for Regression Testing.
Within each category, tasks are organized according to severity to help you identify and focus on the most serious issues.
If you want the Quality Tasks view to display the severity of each task, open the pull-down menu in the Quality Tasks view, choose Configure Contents, and enable the Severity column.
The Test Case Explorer helps you manage a project’s test cases, test suites, and related data sources. It provides detailed test statistics (executed/passed/failed/skipped) and allows you to search/filter the test case tree.
By default, the Test Case Explorer is open on the left side of the UI. If it is not available, you can open it by choosing Parasoft> Show View> Test Case Explorer.
For details on the Test Case Explorer, see Exploring the C++test UI.
To view test case details, enable the Show> Details option from the Test Case Explorer menu.
The following information will be displayed in the Test Case Explorer tree:
To open the source code for a test suite or test case in the Test Case Explorer, right-click its Test Case Explorer node, then choose Open. Or, double-click its Test Case Explorer node.
To open the test results (if available) for a test case in the Test Case Explorer, right-click its Test Case Explorer node, then choose Show in Quality Tasks. You can also perform the reverse action—to see the Test Case Explorer node that correlates to a result in the Quality Tasks view, right-click the Quality Tasks view node, then choose Show in Tests.
To view detailed information about executed data source test cases, enable Show > Data source tests from the Test Case Explorer menu. This will make C/C++test display information about each executed iteration of the data source test case.
The data source test case node of the tree presents statistics about the executed iterations (e.g., the number of passed and failed iterations). Statistics shown for all parent nodes of a data source test case also include the individual data source test case iterations.
There are several ways to create a report containing the maximum amount of unit test execution and traceability information:
| From here... | Perform these steps... |
|---|---|
| Generating test cases automatically | |
| Creating test cases using Test Case Wizard | |
| Executing tests | |
| Generating reports | |
| Generating reports from the command line | |
Different types of findings require different response strategies. The following table lists the categories used to classify C/C++test’s test execution findings, and links to sections that will help you understand and respond to them.
| Category | Subcategory | Description and Recommended Response |
|---|---|---|
| Fix Unit Test Problems | Assertion Failures | See Assertion Failures and Timeouts |
| | Runtime Exceptions | See Runtime Exceptions |
| Review Unit Test Outcomes | Unverified Outcomes | See Unverified Outcomes |
The Quick Fix® feature can be used to automate actions commonly performed while reviewing and responding to unit test findings.
To use Quick Fix to respond to a test execution finding:
The following section explains the available Quick Fixes. Note that the Quick Fixes available for a specific task depend on the nature of the task.
If you want to apply the same action to multiple reported problems:
The Test Execution Details report is an additional report that you can generate from your regular report. It provides details about the following:
See Generating the Test Execution Details Report for details.