In this chapter:
Introduction
Risky Code Changes calculates metrics associated with code changes and displays them in DTP widgets and reports. Risky Code Changes derives three different metrics on a per-file basis:
- test score
- maintainability score
- risk score
The metrics are derived according to the following calculations (a worked sketch follows this list):
- Test Deficit (test score) = 100 - ([# Tests / (2 * # Methods) * 100, capped at 100] + 4 * [Coverage, as a percentage]) / 5
- Maintenance Burden (maintainability score) = 200 - Maintainability Index, capped at 200
- Quality Debt (risk score) = (# Severity 1 and 2 violations + # Test Failures) / Logical lines of code
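The following sketch shows how the three scores could be computed for a single file, assuming one reasonable reading of the grouping and capping in the formulas above. The FileMetrics and computeScores names are illustrative only and are not part of the product.

```typescript
// Illustrative types and names only; not part of the Risky Code Changes API.
interface FileMetrics {
  tests: number;                // # Tests covering the file
  methods: number;              // METRIC.NOMIT (Number of Methods in Types)
  coveragePercent: number;      // code coverage as a percentage (0-100)
  maintainabilityIndex: number; // METRIC.MI (Maintainability Index)
  sev1and2Violations: number;   // severity 1 and 2 static analysis violations
  testFailures: number;         // # Test Failures
  logicalLines: number;         // METRIC.NOLLOCIF (Number of Logical Lines in Files)
}

function computeScores(m: FileMetrics) {
  // Test Deficit: 100 minus a weighted blend of test density (capped at 100) and coverage
  const testDensity = Math.min((m.tests / (2 * m.methods)) * 100, 100);
  const testDeficit = 100 - (testDensity + 4 * m.coveragePercent) / 5;

  // Maintenance Burden: 200 minus the Maintainability Index, result capped at 200
  const maintenanceBurden = Math.min(200 - m.maintainabilityIndex, 200);

  // Quality Debt: violations and test failures normalized by logical lines of code
  const qualityDebt = (m.sev1and2Violations + m.testFailures) / m.logicalLines;

  return { testDeficit, maintenanceBurden, qualityDebt };
}
```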
Once the scores have been computed, they can be displayed in various formats depending on the mode parameter sent in the request.
Resubmit Data if Updating Risky Code Changes
If you were using Risky Code Changes 2.0.0 or older and have upgraded from DTP 5.3.1 or older, you must send a new test and coverage report to the DTP server and set it as a baseline build when configuring the widget.
You can resubmit a previous build report to the data collector to populate the data. Once the data is in DTP, make sure to archive the build so that it won't be removed during normal database cleanup (see Locking and Archiving Builds).
Requirements
- The following metrics should be enabled in the code analysis tool for the DTP filter (see Metrics Tab for information on enabling metrics and for metrics documentation):
- METRIC.MI (Maintainability Index)
- METRIC.NOLLOCIF (Number of Logical Lines in Files)
- METRIC.NOMIT (Number of Methods in Types)
- The DTP filter must be configured with Run Configurations that contain static analysis, metrics, test, and coverage data. If the builds configured in this widget do not have static analysis, metrics, test, and coverage data, the Risky Code Changes logic will not function. See Associating Coverage Images with Filters.
You can confirm that the filter and build meet these requirements by checking the Build Administration page in DTP:
Installation
Risky Code Changes is installed as part of the Process Intelligence Pack installation (refer to the Process Intelligence Pack installation instructions for details). After installing the pack, the assets that enable Risky Code Changes functionality must be deployed to your DTP environment.
- If you have not already done so, install the Process Intelligence Pack.
- Open Extension Designer and click the Services tab.
- Expand the Process Intelligence Engine service category. You can deploy assets under any service category you wish, but we recommend using the Process Intelligence Engine category to match how Parasoft categorizes the assets. You can also click Add Category to create your own service category (see Working with Services for additional information).
- You can deploy the artifact to an existing service or add a new service. The number of artifacts deployed to a service affects the overall performance. See Extension Designer Best Practices for additional information. Choose an existing service and continue to step 6 or click Add Service.
- Specify a name for the service and click Confirm.
- The tabbed interface helps you keep artifacts organized within the service. Organizing your artifacts across one or more tabs does not affect the performance of the system. Click on a tab (or click the + button to add a new tab) and choose Import from the vertical ellipses menu.
- Choose Local> Flows> Workflows> Process Intelligence> Risky Code Changes and click Import.
- Click anywhere in the open area to drop the artifact into the service.
- Click Deploy to finish deploying the artifact to your DTP environment.
- Return to DTP and refresh your dashboard. You will now be able to add the related widgets.
Profile Configuration
The Risky Code Changes slice ships with a "demo" model profile that defines how thresholds are visualized in its associated widgets and reports. The slice calculates the data, but the profile determines the markers.
The demo profile is a fully-configured example that includes three risk level thresholds:
- low
- medium
- high
See Working with Model Profiles for information on configuring model profiles.
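As a rough illustration, the following sketch classifies a per-file score into the three risk levels using hypothetical threshold values. The riskLevel function and its threshold values are assumptions for illustration only; the actual thresholds are defined in the model profile, not in code.

```typescript
// Hypothetical classification into the profile's risk levels; not product code.
type RiskLevel = 'low' | 'medium' | 'high';

function riskLevel(score: number, mediumThreshold: number, highThreshold: number): RiskLevel {
  if (score >= highThreshold) return 'high';   // at or above the high threshold
  if (score >= mediumThreshold) return 'medium';
  return 'low';                                // below the medium threshold
}

// Example: with illustrative thresholds of 1 (medium) and 5 (high),
// a score of 3.2 would be classified as medium risk.
const level = riskLevel(3.2, 1, 5); // "medium"
```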
Caching the Data
Because you can run the Risky Code Changes slice over extended periods of time, the slice includes a caching mechanism to speed up multiple requests for the same data. When data is requested for a filter, the slice first determines whether the data has already been computed and cached. If the cache exists, the data is returned directly and the lengthy computation is skipped. If the data is not cached, if the cached data is for a different build combination, or if more analysis data has been reported for the build combination, the data is recomputed on the fly and the cache is refreshed. There is a maximum of one cache per filter and combination of baseline and target build.
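A minimal sketch of this caching behavior, assuming an in-memory map keyed by filter and baseline/target build combination, is shown below. The ScoreCache and CacheEntry names, and the reportCount staleness check, are illustrative only and do not reflect the slice's actual implementation.

```typescript
// Illustrative cache keyed by filter and build combination; not product code.
interface CacheEntry {
  reportCount: number;              // analysis reports seen for this build combination
  scores: Map<string, { testDeficit: number; maintenanceBurden: number; qualityDebt: number }>;
}

class ScoreCache {
  private entries = new Map<string, CacheEntry>();

  private key(filterId: number, baselineBuild: string, targetBuild: string): string {
    // At most one cache per filter and baseline/target build combination
    return `${filterId}:${baselineBuild}:${targetBuild}`;
  }

  get(
    filterId: number,
    baselineBuild: string,
    targetBuild: string,
    currentReportCount: number,
    compute: () => CacheEntry
  ): CacheEntry {
    const k = this.key(filterId, baselineBuild, targetBuild);
    const cached = this.entries.get(k);
    // Reuse the cache only if it exists and no new analysis data has been reported
    if (cached && cached.reportCount === currentReportCount) {
      return cached;
    }
    // Otherwise recompute on the fly and replace the stale entry
    const fresh = compute();
    this.entries.set(k, fresh);
    return fresh;
  }

  clear(): void {
    this.entries.clear(); // drop all cached calculations
  }
}
```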
Clearing the Cache
Because the slice does not automatically remove cached data, the cache can grow as more filters are introduced. To help you clear out old cache data, the slice provides a flow that deletes all cached calculations from the PIE database. The cache is also cleared automatically at 00:00 every day. You can configure this schedule by editing the Clean Cache inject node.
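For illustration, the following sketch mimics a daily clear at 00:00, analogous to what the Clean Cache inject node does on a schedule. It reuses the illustrative cache shape from the previous sketch and is not the product's actual flow code.

```typescript
// Hypothetical daily cache-clearing schedule; not the actual Clean Cache flow.
function scheduleDailyClear(cache: { clear(): void }): void {
  const now = new Date();
  const nextMidnight = new Date(now);
  nextMidnight.setHours(24, 0, 0, 0); // rolls over to 00:00 of the next day

  setTimeout(() => {
    cache.clear();                                          // first clear at midnight
    setInterval(() => cache.clear(), 24 * 60 * 60 * 1000);  // then repeat every 24 hours
  }, nextMidnight.getTime() - now.getTime());
}
```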
Widget Configuration
Risky Code Changes widgets are added to the Process Intelligence category. See Adding Widgets for details on adding widgets.
You can configure the following widget settings:
Setting | Description
---|---
Title | You can retitle the widget or use the default title.
Filter | Choose Dashboard Settings or a specific filter from the drop-down menu. See Creating and Managing Filters for additional information.
Period | Choose Dashboard Settings or a specific period from the drop-down menu. The period is a span of time or code drops.
Baseline Build | The baseline build is the first set of data points presented in the widget. You can choose Dashboard Settings, First Build in Period, or Previous Build.
Target Build | The target build is the last set of data points presented in the widget. You can choose Dashboard Settings or Last Build.
Profile | A profile is required for determining thresholds and how they should be visualized. See Profile Configuration for additional information.
Coverage Image | Coverage images are identifiers for the coverage data associated with a test run. See Associating Coverage Images with Filters for additional information.
Check Build Administration to Get the Correct Build
By default, Baseline Build is set to Previous Build and Target Build is set to Latest Build when you configure the widgets. The slice will automatically select the two most recent builds, but these builds may not contain test and coverage details. You should check the Build Administration page in DTP and use an appropriate baseline and target build when configuring the widget as described in the Requirements section. Also see Build Administration.
Viewing Widgets and Reports
The Risky Code Changes slice includes the following widgets that can be added to DTP after installation and deployment.
Risky Code Changes - Pie Chart
This widget shows risky code change aggregates (e.g., the number of files that fall into the various risk levels defined in the profile).
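The aggregation behind this widget can be illustrated with a short sketch: count the files that fall into each risk level. Which per-file score is compared against the profile's thresholds is not specified here, so the riskScore field and the classify callback below are assumptions for illustration only.

```typescript
// Illustrative pie chart aggregation: files counted per risk level; not product code.
type RiskLevel = 'low' | 'medium' | 'high';

function countByRiskLevel(
  fileScores: { file: string; riskScore: number }[],
  classify: (riskScore: number) => RiskLevel
): Record<RiskLevel, number> {
  const counts: Record<RiskLevel, number> = { low: 0, medium: 0, high: 0 };
  for (const f of fileScores) {
    counts[classify(f.riskScore)] += 1; // one pie slice per risk level
  }
  return counts;
}
```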
Risky Code Changes - Bubble
This widget shows the risky code changes per file. The test score is mapped to the X-Axis, the maintainability score is mapped to the Y-Axis, and the risk score is mapped to the bubble size.
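As a rough illustration of this mapping, the following sketch converts per-file scores into bubble chart points. The FileScores and BubblePoint shapes are assumptions for illustration and are not the widget's actual data contract.

```typescript
// Illustrative mapping of per-file scores onto bubble chart axes; not product code.
interface FileScores {
  file: string;
  testScore: number;            // mapped to the X-axis
  maintainabilityScore: number; // mapped to the Y-axis
  riskScore: number;            // mapped to the bubble size
}

interface BubblePoint {
  x: number;
  y: number;
  r: number;     // bubble size
  label: string; // file shown for the bubble
}

function toBubblePoints(files: FileScores[]): BubblePoint[] {
  return files.map(f => ({
    x: f.testScore,
    y: f.maintainabilityScore,
    r: f.riskScore,
    label: f.file,
  }));
}
```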
Drill-down Reports
Click on a section of the pie chart widget or on a bubble in the bubble chart widget to open the Risky Code Changes drill-down report. The report will be filtered according to the risk level of the data you clicked in the widget.
The header of the report includes links to an explorer view filtered according to the search parameters specified in the report.
The Maintenance Burden column shows the Maintainability Index value calculated for the file. Click on a link in this column to view the file's Maintainability Index metric values in the Metrics Explorer.
The Test Deficit column shows the number of tests covering the file, the number of methods used in the metrics calculation, and the level of code coverage. A Test Deficit score is calculated based on these values. You can perform the following actions:
- Click on the Tests link to view the tests in the Test Explorer.
- Click the Methods link to view the methods in the Metrics Explorer.
- Click the Coverage link to view the code coverage for the file in the Coverage Explorer.
The Quality Debt column shows the number of severity 1 and 2 static analysis violations, number of test failures, and the Logical Lines of Code score. A Quality Debt score is calculated from these values. You can perform the following actions:
- Click on the Test Failures link to view the tests in the Test Explorer.
- Click the Logical Lines of Code link to view the logical lines of code metric in the Metrics Explorer.
Troubleshooting
You will receive an error instead of a rendered widget if the coverage information is inaccessible (see Requirements).
The following error is returned when the filter has a coverage tag that does not contain coverage information:
Verify that the filter has a coverage image associated with it (see Associating Coverage Images with Filters) and that Data Collector is configured to accept the coverage image (see Controlling Coverage Data Processing for DTP Enterprise Pack Artifacts). New reports must be sent to DTP after the configuration has been corrected.
The following error is returned when Data Collector is configured to accept your coverage data, but the specific coverage image does not contain data.
Verify that the filter has a coverage image associated with it (see Associating Coverage Images with Filters) and that the correct coverage image was selected when you added the widget (see Widget Configuration).
If an error persists after you have addressed the root cause, you may need to clear the cache.