Introduction

Run configurations specify how tests are executed. The Selenic run configurations panel enables you to configure which features are enabled during execution. Refer to the Usage section for additional information about executing the run configurations you create.

Creating a Run Configuration with Default Settings

The recommended workflow is to select a test, package, or project and run it as a JUnit or TestNG test, which creates a configuration with default settings that you can modify. 

Eclipse

  1. Select a test, package, or project and choose Run > Run with Selenic > JUnit Test or TestNG Test. If the necessary WebDriver(s) are configured in the Parasoft Preferences page, the test will start running (see Installing WebDrivers).
  2. When the test execution completes, choose Run > Selenic Configurations...
  3. Click on the auto-generated run configuration under JUnit or TestNG in the sidebar menu.
  4. Click the Selenic tab and update the run configuration settings accordingly. See Selenic Run Configuration Settings.

  5. Click Apply to save the configuration.

IntelliJ

  1. Right-click on a test, package, or project and choose Run '<Scope>' with Selenic. If the necessary WebDriver(s) are configured in the Parasoft Preferences page, then the test will start running (see Installing WebDrivers). 
  2. When the test execution completes, choose Run > Edit Configurations...
  3. Click on the auto-generated run configuration under JUnit or TestNG in the sidebar menu.
  4. Click the Startup/Connection tab and update the Selenic run configuration settings accordingly. See Selenic Run Configuration Settings.

  5. Click Apply to save the configuration.

Manually Creating Run Configurations

You can manually create run configurations from the Selenic Configurations page. 

Eclipse

  1. Choose Run > Selenic Configurations...
  2. Choose either JUnit or TestNG in the sidebar and click the New launch configuration button.
  3. Click the Test tab and configure the test execution settings.
    1. JUnit
      • Choose either JUnit 4 or JUnit 5 as the test runner. The test runner must match the framework your tests were written for. If you created the project with Selenic, the framework configured in the Selenic preferences page will be used. See Configuring Test Creation Settings.
      • Enable the Run a single test option if you want to use the configuration to run a single test. Specify the project, test class, and test method in the appropriate fields.
      • Enable the Run all tests in the selected project, package or source folder option if you want to run a group of tests. Specify the project, package, or source folder.
    2. TestNG
      • Specify the project containing your tests.
      • Browse for the class, method, groups, package, or suite of tests you want to run in the appropriate field(s).
      • Specify a log level and serialization protocol.
  4. Click the Selenic tab and enable your options. See Selenic Run Configuration Settings.
  5. Click Apply to save the configuration.

IntelliJ

Refer to the IntelliJ documentation for additional details about configuring tests.

  1. Choose Run> Edit Configurations... 
  2. Click on a JUnit or TestNG template and click the Configuration tab.
  3. Choose the module containing your tests in the Use classpath of module: field.
  4. Configure the scope of the run configuration using the Test kind:, Class:, and Method: fields, then click the Startup/Connection tab.

  5. Enable your options (see Selenic Run Configuration Settings) and click the Create configuration link.
  6. Specify a name for the configuration and click Apply.

Selenic Run Configuration Settings

The following table describes Selenic run configuration settings:

Generate recommendations

Enable this option to generate element locator recommendations. Selenic analyzes previous test runs to generate recommendations. 

The test must be successfully executed with Selenic at least once for Selenic to generate recommendations.

The following options are available:

  • Failed locators: Enable this option to generate recommendations for failed element locators only.
  • All locators: Generates recommendations for all element locators. The Selenic Recommendations view will show the recommendations for all locators, but the HTML report will still only show recommendations for failed locators.
Self-healing

Self-healing allows test execution to continue when changes to the application may otherwise halt testing activities. Self-healing is typically enabled when executing tests from the command line. Enable this option and configure the following self-healing settings.

Locators 

Enable this option and Selenic will automatically attempt to update broken element locators during execution. Selenic uses historical data about successful test executions to determine the best possible locator to replace a broken locator. Healed locators are flagged and included in the report for further investigation.
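
The selection logic can be pictured as choosing, among known alternative locators for an element, the one with the best track record. The sketch below is purely illustrative; Selenic's actual healing algorithm is internal and may weigh additional factors:

```java
import java.util.Map;

// Conceptual sketch only: picks a replacement locator from hypothetical
// historical success data. This is NOT Selenic's implementation.
public class LocatorHealing {
    /** Returns the candidate locator with the most recorded successful runs. */
    static String bestCandidate(Map<String, Integer> successCounts) {
        return successCounts.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElseThrow();
    }

    public static void main(String[] args) {
        // Hypothetical alternatives for a broken locator, with past success counts.
        Map<String, Integer> history = Map.of(
                "#loginBtn", 12,
                "button[name='login']", 9,
                "//button[text()='Log in']", 4);
        System.out.println(bestCandidate(history)); // prints #loginBtn
    }
}
```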

Wait conditions 

Enable this option and specify how much Selenic should extend Selenium wait conditions to prevent timeouts.

You can specify a percentage of the original value to add when a wait condition is identified. For example, if a wait condition is configured for 4 seconds, setting the Additional time added to existing wait (percent) field to 50 extends the wait condition to 6 seconds.

You can also specify a minimum number of seconds to add for all wait conditions identified by Selenic in the Minimum additional time (seconds) field. 
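
The arithmetic described above can be sketched as follows. The helper name is hypothetical, and the assumption that the larger of the percentage-based and minimum additions applies is ours, not documented Selenic behavior:

```java
// Sketch of the wait-extension arithmetic described above.
// extendWait is a hypothetical helper, not part of the Selenic API.
// Assumption: the larger of the percentage-based extra and the minimum
// additional time is applied.
public class WaitExtension {
    /** Returns the extended wait, in seconds. */
    static double extendWait(double originalSeconds, double additionalPercent,
                             double minAdditionalSeconds) {
        double percentExtra = originalSeconds * additionalPercent / 100.0;
        return originalSeconds + Math.max(percentExtra, minAdditionalSeconds);
    }

    public static void main(String[] args) {
        // A 4-second wait with 50% additional time becomes 6 seconds.
        System.out.println(extendWait(4, 50, 0)); // prints 6.0
        // A 2-second minimum wins when the percentage-based extra is smaller.
        System.out.println(extendWait(4, 10, 2)); // prints 6.0
    }
}
```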

Take screenshots on failures

Enable this option to take a screenshot of the browser when a failure occurs.
Open HTML report after execution

Enable this option to automatically open the report when Selenic finishes running.

The report opens in either an internal or external browser. You can configure how the report opens in the IDE preference settings.

When this option is enabled, the Report when execution time exceeds threshold option becomes available.

Report when execution time exceeds threshold

Enable this option to use the performance benchmarking feature. During performance benchmarking, the average execution duration per test method is calculated based on a minimum set of test runs. Test methods that exceed the average duration by a specified percentage are flagged in the report. Only runs for passing tests are included in the calculation when determining the average.

By default, at least five test runs are required to benchmark performance. The minimum number of test runs is only configurable when running Selenic on the command line (see -performanceBenchmarking).

By default, the benchmark threshold is 20 percent, but you can configure the threshold in the Execution time threshold (percent) field.
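
The flagging rule described above can be illustrated with a short sketch (method names are illustrative, not Selenic's API; the average is taken over passing runs only):

```java
import java.util.List;

// Illustrates the benchmarking rule described above: a test is flagged when
// its duration exceeds the average of passing runs by more than the
// threshold percentage. Sketch only, not Selenic's implementation.
public class Benchmark {
    /** Average duration over passing test runs, in seconds. */
    static double averageOfPassing(List<Double> passingDurations) {
        return passingDurations.stream()
                .mapToDouble(Double::doubleValue).average().orElse(0);
    }

    /** True when a duration exceeds the average by more than the threshold. */
    static boolean exceedsThreshold(double duration, double average,
                                    double thresholdPercent) {
        return duration > average * (1 + thresholdPercent / 100.0);
    }

    public static void main(String[] args) {
        // Five passing runs (the default minimum) averaging 2.0 seconds.
        List<Double> passing = List.of(1.8, 2.0, 2.2, 1.9, 2.1);
        double avg = averageOfPassing(passing);
        // With the default 20% threshold, 2.5s is flagged; 2.3s is not.
        System.out.println(exceedsThreshold(2.5, avg, 20)); // prints true
        System.out.println(exceedsThreshold(2.3, avg, 20)); // prints false
    }
}
```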

Execution time threshold (percent)

Specifies how much tests in the current run may exceed the average execution time before being flagged. If tests in the current run exceed the average duration by more than the specified percentage, they are flagged in the HTML report (see Viewing the Report).

The Report when execution time exceeds threshold option must be enabled to specify the threshold.

Default is 20 percent.

Create API tests with Parasoft SOAtest

Enable this option to create API tests for REST calls made by the application under test during test execution. 

API tests will be added to the workspace of the SOAtest server specified in the API Test Creation Options. A separate license feature for API test creation is required for the SOAtest server. See Parasoft Recorder for details about API test creation. 

  • Proxy port: Port number on which the SOAtest Web Proxy is running. 
  • SOAtest host: The host or IP address where the SOAtest server is running. 
  • SOAtest port: Port number on which the SOAtest server is running.
  • Username/Password: If authentication is enabled for the SOAtest server, specify your credentials in these fields. 

Only one .tst file can be created at a time if you enable this option. If you are running tests concurrently, API tests will only be created for one scenario at a time. The following error will be shown in the log for the other scenarios running at the same time:

[ERROR] ParasoftSelenicAgent - Failed to start web proxy session: A session is currently in
process. Wait for the session to finish before starting another one.

To capture API tests for all test scenarios, configure the test framework to run in a single thread (one test scenario at a time).
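
For example, with TestNG you can disable parallel execution in the suite file. This is a minimal sketch; the suite, test, and class names below are placeholders:

```xml
<!-- Minimal testng.xml that runs scenarios sequentially.
     Suite, test, and class names are placeholders. -->
<suite name="SelenicSuite" parallel="false" thread-count="1">
  <test name="WebScenarios">
    <classes>
      <class name="com.example.tests.LoginTest"/>
    </classes>
  </test>
</suite>
```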
