This guide is intended to help you collect code coverage information for manual regression tests performed on an application under test (AUT) and leverage that information to optimize subsequent manual regression testing sessions with test impact analysis.
The primary audience for this guide is anyone responsible for ensuring compliance with your organization's application coverage policy, including QA engineers, testers, developers, and build masters.
Prerequisites
We assume that you are familiar with Parasoft technologies and have already deployed and licensed the products used in this guide: Parasoft CTP, Parasoft DTP, and, if you plan to generate static coverage from source code, Jtest or dotTEST.
In DTP, create a project for each microservice for which you want to collect coverage. All coverage data for a given microservice will be published to its corresponding project. For details, see the Parasoft DTP user guide at https://docs.parasoft.com.
It is important that the following properties are set when generating and uploading both static and runtime coverage to DTP. To correctly associate the reported static and runtime coverage, these values must be identical on a per-microservice basis. The dtp.project setting can and should differ between microservices so that each one reports to its own DTP project, but for any given microservice the static coverage settings and the runtime coverage settings must reference the same project: both sets of settings for Microservice A should reference Project A, and both sets of settings for Microservice B should reference Project B. An example follows the list below.
- dtp.project: This is the dtp.project setting in the settings for static coverage generation and the DTP project field in the Coverage tab of the Component Instance manager in CTP where the coverage agent is configured for the microservice.
- build.id: This is the build.id setting in the settings for static coverage generation and the Build ID field in the Coverage tab of the Component Instance manager in CTP where the coverage agent is configured for the microservice.
- report.coverage.images: This is the report.coverage.images setting in the settings for static coverage generation and the Coverage images field in the Coverage tab of the Component Instance manager in CTP where the coverage agent is configured for the microservice. The value %{dtp_project} will automatically give the coverage image the same name as the DTP project. If the coverage image has any other name (such as %{dtp_project}-all, %{dtp_project}-functional, or %{dtp_project}-unit), you will need to update your DTP filter accordingly.
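As a concrete illustration, here is a minimal sketch of how these values might line up for a single microservice. The project name and build ID below are illustrative placeholders, not values from your environment.

```properties
# Static coverage generation settings for Microservice A (illustrative values)
dtp.project=ProjectA
build.id=ProjectA-2024-06-01
report.coverage.images=ProjectA
```

In the Coverage tab of the Component Instance manager in CTP for the same microservice, the DTP project, Build ID, and Coverage images fields would then be set to these same three values.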
First, in CTP, go to administration (choose User Profile from the username menu in the upper-right corner) and configure your connection to the DTP where you defined your projects.
Next, create a system to represent the architecture of your application. Each microservice should have its own component in this system representation. You will come back to this system later in this guide to add an environment; for now, the system diagram is sufficient. For more details on creating systems in CTP, refer to the CTP user guide at https://docs.parasoft.com.
Static coverage files are used to calculate the denominator of a code coverage metric. This tells us the total number of coverable lines, so that a percentage can be calculated when runtime coverage is measured during testing.
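For example (with purely illustrative numbers), if the static coverage file reports 10,000 coverable lines for a microservice and 4,000 of those lines are executed during testing, DTP will report 40% coverage for that microservice.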
Static coverage files can be created by using Jtest or dotTEST to analyze the project's source code (the preferred method) or using the coverage tool shipped with CTP to analyze the application binaries (jtestcov for Java applications or dottestcov for .NET applications). These coverage tools can be downloaded using the links under the Coverage section of the Getting Started Widget, which can be added or found on the main entry page for CTP. The coverage tools can also be downloaded from a CTP endpoint if used in an automation pipeline.
If development has adopted Jtest or dotTEST in their CI pipelines, these tools can generate the static coverage file from source code and publish it to DTP as an alternative to using the coverage tools shipped with CTP.
If you have access to the application source code, you should generate the static coverage file using Jtest or dotTEST in .xml or .data format if at all possible. This method has several advantages: the static coverage contains metadata about user classes, methods, and lines, and it supports showing source code annotated with coverage data when viewing coverage results in DTP. It also improves Test Impact Analysis, because static coverage generated this way allows more precise analysis, so tests will only be flagged as impacted if they traverse methods that have changed. While you can use TIA with static coverage generated from application binaries, the results will be broader (the analysis is done at the level of classes), and some tests may be flagged as impacted when they do not need to be. See Application Coverage (Jtest) or Application Coverage (dotTEST) for more information on generating static coverage files with those tools.
The jtestcov tool can be found in the downloaded java_agent_coverage zip, under the jtestcov directory. You will need to create a properties file that adds the DTP properties for coverage information. See the DTP Properties for Coverage section for details.
Use the jtestcov jar with a command like the one below running against Parabank:
```
java -jar <PATH_TO_jtestcov.jar> -ctp -app c:/<PATH_TO_APPLICATION>/parabank.war -include com/parasoft/parabank/** -settings c:/<PATH TO .PROPERTIES>/jtestcov.properties
```
The dottestcov tool is used similarly to the jtestcov tool detailed above. It can be found in the downloaded dotnet_agent_coverage zip, under the dottestcov directory.
Invoke dottestcov using dottestcov.bat with a command like the example below:
```
dottestcov.bat -ctp -include "C:\Devel\FooSolution\src\QuxProject\**\*.cs" -exclude "C:\Devel\FooSolution\src\QuxProject\*\tests\**.cs" -app <DIR>\FooSolution.sln -publish -settings <PATH TO .PROPERTIES>\dottestcov.properties
```
```properties
# === LICENSE ===
# === END USER LICENSE AGREEMENT ===
# Set to true to accept the Parasoft End User License Agreement (EULA).
# Please review the EULA.txt file included in the product installation directory.
parasoft.eula.accepted=true

# === NETWORK LICENSE ===
# Enables network license - be sure to configure DTP server settings.
ctp.license.use_network=true
ctp.license.network.edition=custom_edition
# Note: customize the CTP coverage agent tier based on what your CTP license has enabled
ctp.license.custom_edition_features=CTP, Coverage Tier 5

# === DTP SERVER SETTINGS ===
# Specifies URL of the DTP server in the form https://host[:port][/context-path]
#dtp.url=https://localhost:8443
# Specifies user name for DTP server authentication.
#dtp.user=admin
# Specifies password for DTP server authentication - use jtestcli -encodepass <PASSWORD> to encode the password, if needed.
#dtp.password=admin
# Specifies name of the DTP project - this setting is optional.
#dtp.project=[DTP Project Name]

# === DTP REPORTING ===
# Enables reporting test results to DTP server - be sure to configure DTP server settings.
#report.dtp.publish=true
# Specifies a build identifier used to label results. It may be unique for each build
# but may also label more than one test session executed during a specified build.
#build.id=${dtp_project}-yyyy-MM-dd
# Specifies a tag which represents a unique identifier for the run, used to distinguish it from similar runs.
# It can be constructed as a minimal combination of the following variables that makes it unique, or specified manually.
# e.g. ${config_name}-${project_module}-${scontrol_branch}-${exec_env}
#session.tag=[tag]
# Specifies a set of tags that will be used to create coverage images in DTP server.
# Coverage images allow you to track different types of coverage, such as coverage for unit, functional, and manual tests.
# There is a set of predefined tags that will be automatically recognized by DTP; see the examples below.
# You can also specify other tags that will be used to create coverage images.
#report.coverage.images=${dtp_project}
#report.coverage.images=${dtp_project};${dtp_project}_Unit Test
#report.coverage.images=${dtp_project};${dtp_project}_Functional Test
#report.coverage.images=${dtp_project};${dtp_project}_Manual Test

# === CONSOLE VERBOSITY LEVEL ===
# Increases console verbosity level to high.
#console.verbosity.level=high
```
Defining includes and excludes is an important part of controlling how much your system processes, which can greatly affect how long it takes to generate your static coverage files. In many cases, there is no need to measure code coverage of third-party library code, so it is preferable to limit code coverage to project code using these settings. Includes and excludes should be expressed as comma-separated lists of patterns that specify classes to be instrumented. The following wildcards are supported: * matches any number of characters within a single package or directory segment, and ** matches any number of nested packages or directories.
For example, an include pattern such as com/parasoft/parabank/** (used in the Parabank command above) instruments all classes from the com.parasoft.parabank package and its subpackages, while an exclude pattern removes matching classes, such as test packages, from instrumentation.
If you are outside of development, working with the application binaries, and are not sure what the appropriate include/exclude settings should be, reach out to your colleagues in development to make sure you use the correct patterns. There is usually a standard root package pattern for packaging company code that they can provide.
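To make this concrete, assuming the jtestcov tool accepts an -exclude option in the same way the dottestcov example above does, a run against Parabank that limits instrumentation to project code while skipping a hypothetical generated-code package might look like this:

```bash
# Hedged sketch: the excluded package is a made-up example; adjust both patterns to your codebase.
java -jar jtestcov.jar -ctp \
  -app parabank.war \
  -include "com/parasoft/parabank/**" \
  -exclude "com/parasoft/parabank/generated/**" \
  -settings jtestcov.properties
```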
After generating static coverage, it should be visible in DTP. The coverage widget will show some number of coverable lines with zero lines covered overall; this will change once runtime coverage is uploaded to DTP in the next steps.
Attaching the coverage agent to a microservice enables collecting dynamic (runtime) coverage for it. Typically, this is done as part of an automated deployment process coming from a CI/CD pipeline. As part of this process, you will have the opportunity to review the agent's settings in the agent.properties file; while doing so, ensure that the include and exclude values are the same as those defined in your static coverage process.
The agent.jar file for the agent, as well as the agent.properties file for configuring it, can be found in the downloaded java_agent_coverage zip, under the jtest_agent directory.
An argument of the form:
```
-javaagent:"<PATH_TO_AGENT_DIR>\agent.jar"=settings="<PATH_TO_AGENT_PROPERTIES_FILE>\agent.properties",runtimeData="<PATH_TO_RUNTIME_DIR>\runtime_dir"
```
needs to be added to the microservice's invocation or startup script in order to attach the agent to the microservice.
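For example, a minimal startup script for a Java microservice might look like the following sketch; the directory paths and the service jar name are placeholders for your own deployment layout.

```bash
#!/usr/bin/env bash
# Illustrative startup script that attaches the Parasoft coverage agent to a microservice.
# AGENT_DIR, RUNTIME_DIR, and accounts-service.jar are placeholders for your environment.
AGENT_DIR=/opt/parasoft/jtest_agent
RUNTIME_DIR=/opt/parasoft/jtest_agent/runtime_dir

exec java \
  -javaagent:"$AGENT_DIR/agent.jar"=settings="$AGENT_DIR/agent.properties",runtimeData="$RUNTIME_DIR" \
  -jar accounts-service.jar
```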
Once started, you can check that the agent is running by going to the agent's status endpoint at http://<HOST>:<PORT>/status, where <HOST> is the host where the microservice is running and <PORT> is the port specified in agent.properties (8050 by default).
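For example, if the agent is running with the default port on the local host, a quick command-line check might look like this (host and port are placeholders for your environment):

```bash
# Query the coverage agent's status endpoint (default port 8050)
curl http://localhost:8050/status
```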
You can use the Coverage Wizard tool, found in the dottest_agent folder of the downloaded dotnet_agent_coverage zip, to set up coverage agents. It can be invoked directly to provide agent configuration via a GUI or, alternatively, run from the command line; see Application Coverage for Standalone Applications (dotTEST) or Application Coverage for Web Applications (dotTEST) for more details.
The Coverage Wizard will generate a new folder containing the scripts necessary to run and attach the coverage agent to the microservice it is configured for.
Do not stop the coverage agent process until you are completely done with any coverage workflow.
Once started, you can check that the agent is running by going to the agent's status endpoint at http://<HOST>:<PORT>/status, where <HOST> is the host where the microservice is running and <PORT> is the port specified in agent.properties (8050 by default).
If you have multiple testers working on a web application at the same time, it is recommended that you run your coverage agents in multi-user mode; otherwise, the coverage data that gets collected might be mixed. How you run your coverage agents in multi-user mode depends on which coverage agents you're using.
- For Java coverage agents, make sure jtest.agent.enableMultiuserCoverage and jtest.agent.autoloadMultiuserLibs are enabled in agent.properties. When jtest.agent.enableMultiuserCoverage is enabled, jtest-otel-ext.jar and opentelemetry-javaagent.jar will be automatically loaded from the agent.jar directory. All three jars are included in the coverage agent download and should be deployed to the same directory where the coverage agent was installed.
- For .NET coverage agents, start the agent in multi-user mode with agent_client.exe -multiuser.
Regardless of which coverage agent you are using, each tester's browser needs to be set up to inject the following HTTP header, where <USER> is the tester's CTP username:

```
baggage: test-operator-id=<USER>
```
You can inject this header using a browser extension. The screenshot below shows the header injected in a Chrome browser with such an extension.
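If part of a manual test is driven from the command line or an API client rather than the browser, the same header can be attached to each request. A minimal sketch, where the host, port, path, and username are placeholders:

```bash
# Send the multi-user coverage header with an API call made during a manual test
curl -H "baggage: test-operator-id=jsmith" "http://app-under-test.example.com:8080/api/accounts"
```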
Go back to Environment Manager in CTP, to the system you created during setup, and create an environment for the system. Create a component instance for at least each component that represents a microservice with a coverage agent attached. Coverage agent configuration is done through the Coverage tab of the Component Instance manager. Add the coverage agent's URL as well as the DTP project, filter (optional), build ID, and coverage image settings; these DTP settings should be identical to the settings used when creating and uploading the corresponding static coverage for the microservice this component represents.
If desired, you can additionally fill out other details for component instances. For example, filling out the real endpoints section of the components tab in the component instance manager will allow you to see the status of that endpoint from the environment diagram.
After environment configuration is complete, you can check to make sure all necessary components have their coverage agents configured and connected; components with coverage agents are denoted with a blue C symbol, and a green checkmark signifies a good status for the connection.
Now that the environment in CTP has been fully set up via the GUI, the CTP components or the entire environment can be synchronized automatically from your deployment pipeline via the REST API. For example, when a new version of your microservice is built and deployed, it will have a new build ID that CTP needs to reference. The microservice's deployment endpoint, and thus its coverage agent endpoint, may also change. Keeping CTP synchronized with the current state of your deployed microservices is essential for correctly measuring code coverage and associating it with the builds that have been deployed. If automation drives the deployment of your applications into a test environment, it is recommended to automate updating CTP at the same time.
You can use the CTP REST APIs for updating environments and component instances to automate this; see the CTP API documentation for the available endpoints.
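As a rough, hedged sketch of how a deployment pipeline might perform such an update, the example below uses a placeholder endpoint path and placeholder JSON field names rather than the real CTP API; consult the CTP API documentation for the actual endpoints and payloads.

```bash
#!/usr/bin/env bash
# Hypothetical deploy-stage sketch. The endpoint path and JSON field names are placeholders.
# CTP_USER and CTP_PASS are expected to come from your CI/CD secret store.
CTP_URL="https://ctp.example.com"                     # assumed CTP base URL
ENDPOINT="$CTP_URL/<component-instance-endpoint>"     # placeholder path; see the CTP API docs
NEW_BUILD_ID="ProjectA-$(date +%Y-%m-%d)"             # must match the build.id used for static coverage
AGENT_URL="http://service-a.test.example.com:8050"    # coverage agent URL after deployment

# Placeholder payload: field names are illustrative only.
PAYLOAD="{\"buildId\": \"$NEW_BUILD_ID\", \"coverageAgentUrl\": \"$AGENT_URL\"}"

curl -u "$CTP_USER:$CTP_PASS" -X PUT "$ENDPOINT" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD"
```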
Before you run your manual regression tests in CTP, you need to import some basic information about them into CTP and associate them with the system you created. You import manual tests from a CSV file.
The import process reads data from a CSV file. The file must contain a name for each test and can contain additional useful information such as an external test ID and URL, a description, and a requirement ID and URL. It's fine if your CSV has other data; anything CTP cannot use is simply ignored by the import process.
The first row of the CSV file should contain column name headers. You will use these headers to associate the data in the file with data elements in CTP, so they should be recognizable. The data that can be imported is described in the table below.
Data Element | Description | Required? |
---|---|---|
Test ID | Identifying name or number for the test. This is what will appear in DTP traceability reports and as clickable text (going to the Test URL as configured below) in CTP and DTP. | No |
Test URL | URL for the test. This should be a properly formed URL, including the protocol (http(s)://). | No |
Name | Name for the test. This is often more descriptive than the Test ID. | Yes |
Description | Longer description of the test. | No |
Requirement ID | Identifying name or number for the requirement. | No |
Requirement URL | URL for the requirement. This should be a properly formed URL, including the protocol (http(s)://). | No |
Each element except for Description has a limit of 255 characters and the URLs should be properly formatted. The order of the data in your CSV is not particularly important; you will be able to associate the data in the CSV with CTP data columns during the import process.
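A minimal example CSV might look like the following; all names, IDs, and URLs are illustrative placeholders.

```csv
Test ID,Test URL,Name,Description,Requirement ID,Requirement URL
TC-101,https://tests.example.com/TC-101,Login with valid credentials,Verify that a registered user can log in,REQ-12,https://reqs.example.com/REQ-12
TC-102,https://tests.example.com/TC-102,Transfer funds between accounts,Verify that a transfer updates both account balances,REQ-15,https://reqs.example.com/REQ-15
```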
When you have your CSV file, open the Manual Testing module in CTP. All of your systems and their environments will be listed here alphabetically. Choose the system you want and click Import Tests then browse to your CSV file. Once the file has been read, a dialog will appear showing the CSV's file options plus a table for associating its data with CTP data columns. The CSV headers are shown on the left and the CTP data elements are shown along the top.
Use the table to associate your CSV data with the appropriate CTP data elements. Since Name is required, it is enabled by default as the first row. If Name is not the first column in your CSV, change it to the correct association, then choose the remaining data associations as appropriate for your CSV. When you're done, click Import. The tests will be shown in the system's table on the page. You can edit or delete tests from there if you need to.
Once you have imported your manual tests and associated them with CTP systems, you can collect test status and code coverage data while you perform them and publish it to DTP. The basic workflow is:
In the Manual Testing module, choose the environment for which you want to start a session. Click Start Session and give the session a name. Session names must be unique within the environment. Then click Start. A new session will start with your tests.
When you start a manual test in CTP, test data is collected as you perform the steps in the test. You can publish this data to DTP when you're done testing.
If any of the components in a system under test change to a newer build, you will not be able to start a new manual test in the manual test session. Publish the manual testing results that you have so far and start a new manual testing session for the updated build IDs.
Open the Manual Testing module and navigate to the session with the test you want to start. Click the test you want to run from the tests in the Manual Test Results table and click Start Test.
Each tester can only have one manual test running per environment at any time. However, multiple tests can run simultaneously (one test per tester) if all of the coverage agents deployed in that environment are set up in multi-user mode (see Coverage Agents in Multi-User Mode). If any coverage agent in the environment is running in single-user mode, you will only be able to run one test at a time in the test session.

If you plan to have multiple testers running tests simultaneously in the test session, remember that each tester must use a browser extension that passes the additional HTTP header with their username while testing (see Coverage Agents in Multi-User Mode). You might choose to set up coverage agents in single-user mode to eliminate the requirement of passing in an HTTP header during test execution; just be aware that this limits testing to a single thread, allowing only one test at a time regardless of how many testers there are. In addition, coverage agents in single-user mode measure coverage indiscriminately: if the application is being used by someone else while the manual tester is performing the test, all of the executed code lines will be included in the measured code coverage. It is generally a good idea to set up the coverage agents in multi-user mode unless you have a dedicated test environment that no one else will be using while manual regression testing is performed.
Once you're done with the manual test that was started, stop the test in CTP and indicate the result. Manual tests can only be stopped by the user who started them or by an administrator.
Open the Manual Testing module and navigate to the session with the test you want to stop. Click the test in the Manual Test Results table and click Stop Test. Click a result (Passed, Failed, or Incomplete) and add notes if desired; notes are published to DTP along with the result and can be helpful, for example, to explain why a test failed or could not be completed. Then click Finish.
You can also abort the test here by clicking Abort Test. This will stop the test without recording a result and delete any coverage data collected. You will be able to restart the test to run it again, if you want.
When you have completed your manual regression tests, you can stop the session and send the data that was collected to DTP. You will also have the option to use that session as a baseline build for test impact analysis. Be aware that you cannot stop a session that has a manual test running. Stop any test that is running before trying to stop a session.
Open the Manual Testing module and navigate to the environment that contains the session you want to stop. Click Stop Session and enable Publish to DTP.
Publishing test status and code coverage data to DTP can only be done while stopping an active test session. It cannot be done after the test session is closed. CTP will not save or back up the session's coverage data, and unpublished data will be lost when the session is closed.
If you want to use this coverage data as a baseline build for test impact analysis, enable Create baseline and enter a baseline name.
You only need to worry about creating baselines if you are implementing the Test Impact Analysis workflow outlined in the following section. Test Impact Analysis requires a baseline build with coverage from a full regression test run to compare against when calculating impacted tests for a subsequent build. Enabling Create baseline marks a build as a baseline build to be compared against; the build is automatically archived in DTP so the data is available when CTP queries DTP to retrieve impacted tests.

The cadence for setting baselines can vary. A baseline should be set whenever a complete regression test run is performed, whether that happens nightly, weekly, or at some other frequency, because you want a complete code coverage mapping of all your test cases to compare with when Parasoft analyzes code changes between builds. CTP and DTP can maintain multiple baselines, which is useful when you want to compare code changes from logical points in the development process. Customers typically baseline from the beginning of a new release cycle, the beginning of a sprint, or as aggressively as the latest full regression test run.
Coverage statistics should now be visible in DTP for the microservice projects.
By clicking on a coverage widget, you can drill down and view coverage statistics not only for the whole project but for individual files and methods as well.
Notice that the screenshot shows "Source code not available." This is because the screenshot was taken from a coverage workflow that used application binaries instead of source code. To see source code with red/green line coverage markers, the Jtest or dotTEST products are required, as they produce the static coverage from sources instead of binaries and transmit the source code, in addition to the static coverage, to DTP for viewing.
You can use CTP and DTP not only to measure code coverage of microservices during functional testing, but also to enable Test Impact Analysis: an automated process for optimizing which tests are selected for execution based on code changes in an application. Code coverage from test executions against a baseline build is used in conjunction with a code diff between that baseline build and a target build to calculate which tests are impacted for testing the target build. Test Impact Analysis can dramatically reduce the number of tests you need to run by showing you which tests can be skipped because they do not cover any of the code changed since your baseline build. It is a valuable technique when grappling with very long testing jobs that prevent fast feedback to development about changes made in a build.
Prerequisite: Modify your microservice(s).
Step 1: Generate New Static Coverage for Each Microservice.
Step 2: Attach Coverage Agents to Each Microservice.
Step 3: Update CTP Environment with New Build Details.
Step 4: Call CTP REST API to Retrieve Impacted Tests.
To conduct Test Impact Analysis, it is necessary to have a baseline build with coverage information from a full regression test run to compare against. Follow the Manual Test Workflow defined in the previous section, including creating a baseline when stopping a manual test session that will be used as a reference point for comparing with a target build in this workflow.
After changes have been made to your microservices that trigger a new target build, the following steps define how to integrate Test Impact Analysis into your test execution.
Follow Step 1 of the Manual Test Workflow and generate new static coverage files for each microservice that changed, using new build IDs to describe the new builds that are to be tested.
These static coverage files will be published to DTP, which is how DTP will know what code changed between the baseline and target builds in order to calculate impacted tests.
This part of the deploy process is no different from Step 2 of the Manual Test Workflow. The updated microservices should get deployed to your test environment in preparation for the testing phase.
While it is technically not necessary to deploy coverage agents to run impacted tests, it is recommended to maintain a consistent deployment process where the coverage agents are part of the standard deployment to your test environment.
Similar to Step 3 of the Manual Test Workflow, the same CTP environment should be updated with the new microservice builds during the deploy phase of the CI/CD process. It is important for CTP to reference the new build IDs of each microservice, and these must match the build IDs used to generate the new static coverage files in Step 1 of this workflow.
Since CTP needs to be updated every time new microservice builds are deployed into a test environment, it is recommended to use CTP's REST API to automate the updates during this deploy phase of the pipeline.
The Test Impact Analysis workflow requires configuring the optional Filter setting in CTP's coverage settings for a Component Instance. DTP projects can have a number of filters that are used to segment the data being reported. By default, each DTP project starts with one filter that automatically includes all reports published to that project, and the CTP component instance UI initializes to this filter by default. If you update CTP environments via the REST API, be sure the Filter setting is retained. Advanced users who use multiple DTP filters per project should be sure to reference a filter that contains the coverage reporting that CTP publishes to the corresponding DTP project for each microservice.
At this point, both your test environment and Parasoft platform should be ready for your optimized test execution with Test Impact Analysis. Before running manual regression tests, you need to retrieve the list of impacted tests from CTP in order to know which Test IDs should be included. You can do this with the CTP REST API or from the DTP Test Impact widget.
If you are collecting code coverage from a microservices architecture that spans multiple DTP projects, you should use the CTP REST API to retrieve the list of impacted tests because CTP will handle organizing and deduplicating the impacted tests across all of the projects in your environment. If you are performing test impact analysis that corresponds to a single DTP project, you can get impacted tests from the DTP Impacted Tests widget as described in step 4.b.
Send a REST request to the CTP API endpoint for retrieving an environment's impacted tests, where environmentId is the ID of the environment used for both the baseline and the target (new) build. Use the baselineBuildId query parameter to specify the baseline build ID you chose in Step 5.d of the Manual Test Workflow when running your baseline test build.
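As a rough sketch of what that request might look like from an automation script, the example below leaves the endpoint path as a placeholder; consult the CTP API documentation for the exact path.

```bash
# Placeholder sketch: retrieve impacted tests for an environment from CTP.
# <impacted-tests-endpoint> is not the real path; see the CTP API documentation.
ENVIRONMENT_ID=123
BASELINE_BUILD_ID="ProjectA-2024-06-01"
curl -u "$CTP_USER:$CTP_PASS" \
  "https://ctp.example.com/<impacted-tests-endpoint>/$ENVIRONMENT_ID?baselineBuildId=$BASELINE_BUILD_ID"
```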
The endpoint will return a JSON list of tests which DTP has determined have been impacted by code changes between the two builds. In the case where there are multiple microservices with new builds, CTP will query DTP for impacted tests for each changed microservice and then eliminate duplicates so the correct set of impacted tests is returned for all changes across the distributed system that was newly deployed in the test environment. For example, here is a response consisting of a single impacted test:
[ { "id": "5f6edb09-8e34-3885-aa7f-32435d3c8a8f", "testName": "first visit", "analysisType": "FT", "testSuiteName": "first visit", "toolName": "CTP" } ] |
If you are performing test impact analysis for a single DTP project, you can utilize the reporting within DTP to see which manual regression tests should be re-executed.
Open DTP Dashboard in your browser. If you do not already have a Test Impact Analysis widget installed on the dashboard, follow the "Installing the Impacted Tests Widget" process described below. See "Using the Impacted Tests Widget" for information about getting impacted test information from DTP.
Installing the Impacted Tests Widget
Using the Impacted Tests Widget
Open DTP to your dashboard. The Impacted Tests - Summary widget shows the number of impacted tests over the total number of tests analyzed as well as the baseline and target builds.
Click on any part of the widget to drill down to the Impacted Tests report.