Load Test provides a number of ways to customize load tests. This topic explains the available customization options. In this chapter:
You customize how a particular test is run by customizing the scenario you plan to use. You can use scenarios to customize the following parameters:
To customize an existing scenario or create a new scenario:
Enter a new name in the Name field to change the scenario name.
The scenario type determines several aspects of how the scenario can be configured. You can specify the exact number of virtual users or hits per second for each test suite profile. You can also express the test suite profile distribution as a general ratio and let Load Test randomly assign test suite profiles according to the specified ratio.
Choose one of the following from the Scenario Type menu:
Enter new values in the Duration fields. The following duration limits apply:
The Controlled Parameter settings determine whether the test is controlled for number of virtual users or the number of hits per second (hit rate).
Choose one of the following options from the Controlled Parameter menu:
Randomization settings determine how the Controlled Parameter is randomized.
If the Controlled Parameter is set to Number of Users, Load Test can randomize the virtual user think time according to the following distributions:
If the Controlled Parameter is set to Hits per Second, Load Test can randomize the inter-invocation time according to the following distributions:
Choose a value from the Vertical scale menu to set the maximum number of users or hits to be simulated during the test run. The maximum value you can set in a scenario is determined by your license. To see license details, go to Help > About.
If the Controlled Parameter is set to Number of Users, the load applied is the sum of users and hits on all machines in the selected scenario.
If the Controlled Parameter is set to a number of hits configuration, the load applied is determined by the number of hits per second you implement for a given test, as well as the number of machines running the tests. Profile delay options will be unavailable and any preexisting delay settings will be ignored. To determine the number of hits per second you are licensed to execute, divide the number of virtual users for which you are licensed by 10. For example, if you are licensed to use 100 virtual users, you will be able to execute tests at 10 hits per second.
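The licensing arithmetic above is simple enough to sanity-check in a few lines. In this sketch the function name is illustrative; the divide-by-10 rule comes from the text:

```python
def licensed_hits_per_second(licensed_virtual_users):
    """Maximum hits per second allowed by a license, per the divide-by-10
    rule: a license for N virtual users permits N / 10 hits per second."""
    return licensed_virtual_users / 10

# A 100-virtual-user license allows tests at up to 10 hits per second.
print(licensed_hits_per_second(100))
```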
The graph is a visual representation of how Load Test is configured to apply the number of users or hit rate over time. Interfaces for configuring the graph differ depending on whether you are configuring profiles directly or configuring weighted profiles.
Enable the Machine Independent option to apply the same settings to all test machines (default). This option is available only if more than one machine is available. The graph will show one line labeled All Machines, and you can modify the position and shape of the line for all machines simultaneously. Disable the option to apply different settings to different test machines. You can also enable machines individually.
If you are configuring profiles directly (Scenario Type is set to Direct Profiles) and more than one test suite profile is available, you can enable the All option to show all profiles or enable individual profiles.
See Vertical Scale for details on changing the graph scale.
You can modify points on the graph to change its shape.
You can also click and drag points in the graph.
To maintain a steady amount of virtual users or hit rate, ensure that the line is flat, then drag the line up or down until it corresponds to the number of users or hits per second you want for the entire test duration.
You can also right-click on the line in the graph and choose Add point to create additional nodes.
You can click and drag points or right-click and choose Edit point to specify a precise number of virtual users or hits per second. Load Test will automatically determine the number of virtual users or hits per second for each time interval that does not have a specific point by plotting the slope between each of the points.
You can also click More points to automatically double the number of points you have created.
The new points will be added at regular intervals between your existing points. To remove one point, right-click a particular point and choose Remove point. Click Less points to remove points that are not points of inflection.
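Between explicitly placed points, Load Test fills in each time interval by plotting the slope between neighboring points, which is linear interpolation. A minimal sketch of that behavior (function and variable names are illustrative; point times are assumed distinct):

```python
def value_at(points, t):
    """Linearly interpolate the user count (or hit rate) at time t from a
    list of (time, value) graph points, mirroring the slope plotting
    described above. Assumes t falls within the configured time range
    and that point times are distinct."""
    points = sorted(points)
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    raise ValueError("t is outside the configured points")

# A ramp from 0 users at t=0s to 100 users at t=60s passes through
# 50 users at t=30s.
print(value_at([(0, 0), (60, 100)], 30))
```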
You can set the graph to a preset shape.
You can specify the chances that Load Test will select a particular profile at any point in the test duration. The higher the ratio assigned to a profile, the more likely it is to be chosen. Weighting profiles only applies if multiple test suite profiles are available.
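The ratio-based selection can be pictured as a weighted random draw. In this sketch the profile names and ratios are hypothetical; only the behavior that a higher ratio makes a profile more likely to be chosen comes from the text:

```python
import random

# Hypothetical test suite profiles: "Browser" is weighted twice as heavily
# as "API client", so it should be picked roughly twice as often.
profile_weights = {"Browser": 2, "API client": 1}

def pick_profile(weights, rng=random):
    """Randomly select one profile, with probability proportional to its ratio."""
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names], k=1)[0]

counts = {name: 0 for name in profile_weights}
for _ in range(3000):
    counts[pick_profile(profile_weights)] += 1
print(counts)  # "Browser" is chosen roughly twice as often as "API client"
```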
You can save any scenario graph (configuration graphs as well as test progress graphs) as a GIF by right-clicking the graph that you want to save and choosing Save Image.
The Load Test Quality of Service (QoS) analysis feature allows users to estimate the performance of a service/application by applying QoS metrics to Load Test results. Based on these metrics, Load Test can evaluate performance to determine the success or failure of the load test. QoS analysis is particularly important when a service/application is required to meet specific performance requirements before it can be considered ready for deployment.
QoS metrics allow you to define a set of regression tests for a load test. In a successful load test, all metrics pass; any metric that fails indicates a regression in the performance of your service/application.
QoS functionality can be accessed from the load test tree. Under each of the Scenario nodes (Bell, Buffer Test, Linear Increase, Steady Load) is a corresponding QoS node. After selecting a QoS node, the Summary and Details tabs are shown in the right GUI panel.
The Summary tab shows a global view of QoS metrics that have been configured within the Details tab. The Summary tab contains a list of metrics and their descriptions. There are five default metrics in the Summary tab Metric list:
Each of the above metrics is grouped according to its type (for example, Statistic, Count, Throughput, Loss). Any additional metrics you configure will also appear in the Summary tab.
The Description list contains automatically generated descriptions of what each metric is verifying based on the parameters that have been specified for that metric.
The Details tab displays the available metrics that have been configured for the selected scenario. The Details tab also shows the name, parameters, and notes for the metric selected from the Metric List.
To create a new QoS metric:
Select the desired QoS metric from the Add Metric wizard and click Finish.
The QoS metric is added to the left panel while its parameters display in the right panel of the Details tab.
Each metric performs a different set of validations on completed load test results. After a load test is run, a QoS Report Summary will display in the Test Information report, while a separate and more detailed QoS Report will also be available.
The selections you make in the Test Tree Selection panel (described in Test Tree Selection Panel Navigation and Options) within QoS Metric Machines, Profiles and Tests filter panels affect the data sets that are used for calculation of the metric value. This will subsequently affect the success or failure outcome of the metric. You can preview the data used in calculation of the metric types listed below in the relevant sections of the load test report.
Also, see the tool tip displayed in the Parameters panel of a QoS Metric view for further metric explanation based on current metric values.
To make changes to any of the QoS metrics, select a metric from the list in the left pane of the Details tab and configure its parameters accordingly in the right pane of the Details tab. Depending on the metric selected, the options will vary. However, if you hold your cursor above the parameters in the right pane, a tool tip will display information about the parameter configuration.
You can also select one or more metrics from the list (Ctrl-click to select multiple), then right-click your selection and choose Cut, Copy, Paste, or Delete from the shortcut menu. From the same menu, you can choose Save Metrics As to save the selected metrics as a QoS metric set (.ms file), or Load Metrics to load previously saved metric sets. You can also reorder the list by dragging and dropping metrics into the desired order.
For scenarios that contain a large number of profiles, and/or a large number of graph points, it may become difficult to configure and keep track of the various options in each graph of the User and Profiles tab. In such cases, it may be easier to use the Summary Table and Details Table available for each scenario.
To view the Summary Table and Details Table, select the desired Scenario node and the Summary Table and Details Table tabs display in the right GUI panel.
Within the Summary Table and Details Table views, you can edit the green fields, such as the Number of Users for each machine and the Profile Weight of each profile. Changes you make in these green fields will be reflected in the corresponding Users and Profiles tabs, and vice versa. For example, if you change the Profile Weight of the Default User from 1 to 5, that change will be reflected in the Profiles tab. Similarly, the same change in the Profiles tab will be reflected in the Summary Table tab or Details Table tab.
The Summary Table view provides different information depending on whether Weighted Profiles or Direct Profiles are selected as the Scenario Type.
The Details Table view provides different information depending on whether Weighted Profiles or Direct Profiles are selected as the Scenario Type.
An environment is a collection of variables that can be referenced in your Load Test configuration. When running a test, Load Test will substitute the names of variables in your project configurations with the values of those variables in the active environment. By changing which Environment is active, you can quickly and easily change which values Load Test uses.
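Conceptually, this is a name-to-value substitution keyed by the active environment. The sketch below illustrates the idea only; the ${VAR} reference syntax and the example environments and variables are assumptions, not Load Test's actual notation:

```python
import re

# Hypothetical environments: each maps variable names to values.
environments = {
    "staging":    {"HOST": "staging.example.com", "PORT": "8080"},
    "production": {"HOST": "www.example.com",     "PORT": "80"},
}

def resolve(text, active_environment):
    """Replace ${VAR} references in a configuration string with the values
    defined by the active environment; unknown names are left untouched."""
    variables = environments[active_environment]
    return re.sub(r"\$\{(\w+)\}",
                  lambda m: variables.get(m.group(1), m.group(0)), text)

# Switching the active environment changes every substituted value at once.
print(resolve("http://${HOST}:${PORT}/api", "staging"))
print(resolve("http://${HOST}:${PORT}/api", "production"))
```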
Creating and switching environments is done through the Environments tab. To access this tab:
The Environments tab is divided into two sections, the Environments List panel (left) and the Environment Details panel (right).
The environments list panel contains the following buttons to manage environments:
Add: Click to add a new environment to your project. Click the down arrow on the right of the button for the following options:
One option copies the *.env file into your project; subsequent changes to the *.env file will not be reflected in your project. The other option references the *.env file in your project; subsequent changes to the *.env file will be reflected in your project.

After creating and/or selecting an environment, the Environment Details panel will display a table and buttons for managing environment variables.
To edit an existing environment variable, either double-click on a value or simply click to highlight the value and type to overwrite it.
If the SOAtest project that you are using for load testing contains tests with Call Back tools and Asynchronous testing features, or it contains Message Stub tools, you will need to start the SOAtest server inside Load Test in order to process the SOAtest project related asynchronous messaging.
You can start the SOAtest Server in two ways:
If you are running a load test with SOAtest asynchronous tools in command line mode or in Load Test server mode, you will need to enable the automatic SOAtest Server startup on each of these machines (because in command line mode, the SOAtest Server can only be started automatically).
If your SOAtest .tst file references data sources, you can configure how those data sources are used during load testing.
To configure data source options:
If your SOAtest .tst file uses setup or teardown tests, you can configure how those tests are used during load testing.
To configure setup/teardown options:
Each time a virtual user is created, it is based on one of the available profiles. Virtual users are created at the beginning of a load test, and whenever a virtual user completes a test run and the specified load test duration has not yet been exceeded.
You can configure any number of profiles or machines, but the number of virtual users or number of hits per second you use is limited by your license. For example, if you are only licensed to run 100 virtual users, you can configure any number of profiles, but only 100 virtual users based on those profiles can be active at any given time. If you have 20 profiles, but are only licensed to run one virtual user, all 20 profiles will eventually be used during the test (provided that the test has a sufficient duration), but only one profile will be active at any given time.
To customize the default profile or create a new profile:
If you want different tests to use different delays, you can configure the delays at the test level in SOAtest (in the Execution Options > Test Flow Logic tab within the test suite's configuration panel). Any delays configured at the test level will override delays specified in the Load Test profile. The Load Test Begin-to-Begin/End-to-Begin selection will not be applied for tests with individual delays.
It is sometimes desirable to limit the Hit per Second rate in a load test scenario that has Virtual Users selected as a controlled parameter. There are two ways to limit the hit per second rate in the Virtual User mode:
The following image shows an example of configuring Hit-per-Second throttling in the Scenario Summary Table view:
The Report Settings tab of the Scenarios control panel lets you configure what data is recorded during the load test.
Available options are:
Note that the Record Individual Hit Details option can be used when the Record Individual Hits option is disabled. In this case, the individual hit data (the scatter view section of the report) will not be recorded, but the test execution details will still be available via the Show Recorded Details right-click menu option of the load test report graph view.
For information on viewing the Success and Error details in the load test report, see Detailed Reports.
The Stop Settings > Stop Sequence area of the Scenarios control panel lets you configure how Load Test behaves between the moment a load test Stop event is generated and the actual stop of all load testing activities. (We will use the term "Load Test Stop time" to refer to this period.)
A load test Stop event is generated if:
After a Load Test Stop event is generated, Load Test stops creating Virtual Users and waits for the existing Virtual Users to exit. The Quick Stop and Formal Stop options control whether the Virtual Users exit after the completion of the current test or complete all their scheduled tests:
If the stop procedure is taking too long, use the Force Stop button in the Load Test stop progress dialog (see Tip: Stopping a Test for more details).
You can customize how a load test is stopped by utilizing Java, JavaScript, or Jython scripts. This feature is especially useful in automatically stopping load tests for specific circumstances that may cause undesirable results. For example, during nightly load test runs you may want to stop load testing if a certain amount of errors have been reached, or you may want to stop load testing if your CPU utilization reaches a specific threshold.
The script you define communicates with the Load Tester via the com.parasoft.api.loadtester API. It receives LoadTestScriptArgs and Context as arguments, and can return a LoadTestScriptAction. A returned LoadTestScriptAction.Action_Stop has the same effect as clicking Stop in the Load Test GUI during a load test.
You can find sample scripts in the following directory on your machine: [Load_Test/SOAtest_installation_dir]\[version number]\examples\loadtest\LoadTesterScripting.
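The decision such a script encodes is straightforward. The sketch below is plain Python for illustration only: a real stop script would use the com.parasoft.api.loadtester API and return LoadTestScriptAction values, and the metric names and thresholds here are hypothetical:

```python
# Hypothetical stop criteria; the thresholds and metric names are illustrative.
ERROR_THRESHOLD = 50     # stop after this many errors have accumulated
CPU_THRESHOLD = 0.90     # stop once CPU utilization reaches 90%

def should_stop(error_count, cpu_utilization):
    """Return True when the load test should be stopped early. In a real
    stop script, this decision would correspond to returning
    LoadTestScriptAction.Action_Stop."""
    return error_count >= ERROR_THRESHOLD or cpu_utilization >= CPU_THRESHOLD

print(should_stop(10, 0.50))  # healthy run: keep going
print(should_stop(75, 0.50))  # too many errors: stop
```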
To use scripts to customize Load Test stop actions:
High Throughput Mode can be used to apply higher levels of load to the system under test while minimizing the hardware required to generate such load.
Each Load Test machine participating in a load test can be configured to run in a High Throughput Mode. In this mode, the test response verification and non-critical attached tools are disabled. As a result, fewer system resources, such as CPU cycles, are required to run the load test. This allows machines in High Throughput Mode to run at a higher test per second execution rate.
To enable this mode, select the appropriate machine from the Machines node in the Load Test Configuration tree, then enable High Throughput Mode.
Typically, users select some machines to run in the regular Verified mode and some in the Unverified (High Throughput) mode. The machines that run in Verified mode act as both load generators and test result collectors, while the machines in Unverified mode act only as load generators. Upon completion of a load test, the results collected in Verified mode are used to estimate the error count intervals for the machines in Unverified mode, as well as the total error count estimate intervals for the entire load test.
Below are several typical questions and answers that could help better understand how to apply the High Throughput Mode feature:
You should use the High Throughput Mode if you do not have enough hardware to generate the desired load; for instance, if your load-generating machines are running at or above 75-80% CPU utilization.
The throughput increase will vary depending on the structure of your functional tests. The more tools you have attached to the tests in the functional test configuration, the greater the performance gain in the High Throughput Mode.
The calculation of the error estimates is based on the calculation of the "Binomial proportion confidence interval" using Wilson’s method. More on this subject can be found via the following links:
In general, the more tests you run in the Verified mode, the narrower the error estimate intervals you will get. As a rule of thumb, each test should be run in the Verified mode at least a few dozen times, such as 40-60 runs.
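Wilson's interval for an observed error proportion can be computed directly. This sketch (the function name is illustrative; the formula is the standard Wilson score interval) shows why more Verified-mode runs narrow the estimate:

```python
import math

def wilson_interval(errors, runs, z=1.96):
    """Binomial proportion confidence interval (Wilson's method) for an
    observed error rate; z=1.96 gives a 95% confidence level."""
    p = errors / runs
    denom = 1 + z * z / runs
    center = (p + z * z / (2 * runs)) / denom
    half = z * math.sqrt(p * (1 - p) / runs + z * z / (4 * runs * runs)) / denom
    return center - half, center + half

# The same 6% observed error rate, estimated from 50 runs versus 200 runs:
# the larger sample yields a narrower interval around the observed rate.
print(wilson_interval(3, 50))
print(wilson_interval(12, 200))
```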
Parasoft Load Test can be configured to use multiple IP addresses for its Virtual Users. Configuring this IP spoofing involves two steps:
The following SOAtest tools and transports support the multiple IP address functionality:
To set up IP aliases on a Windows machine:
Use the ipconfig command to see the list of available network interfaces, their IP addresses, and masks. Use the netsh command in a DOS prompt window (DOS shell) or in a batch script to add or remove IP addresses.

To add an IP address, run a netsh command similar to the one in the following example (substituting your own network interface name, IP address, and mask).
netsh -c Interface ip add address name="Local Area Connection" addr=10.10.29.9 mask=255.0.0.0
To remove an IP address, run a netsh command similar to the one in the following example (substituting your own network interface name, IP address, and mask).
netsh -c Interface ip delete address name="Local Area Connection" addr=10.10.29.9
To set up IP aliases on a Linux machine:
Use the ip command to add or remove IP aliases on a network interface. The example below adds an IP alias to the eth0 interface:
# ip address add 10.10.29.9 dev eth0
The example below removes an IP alias.
# ip address del 10.10.29.9 dev eth0
To configure Virtual Users to use multiple IP addresses:
The machine network interface and IP configuration view is shown below:
During the load test execution, each Virtual User on a machine will be assigned an IP address from the list of IP addresses for this machine.
Virtual Users (VUs) will be assigned IP addresses in a round-robin manner. For instance, if the machine "localhost" was configured to use two IP addresses as shown in the image above, then the VUs would be assigned IP addresses in the following order:
Once an IP address is assigned to a Virtual User, it will be used in all the tests which that Virtual User has to execute.
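The round-robin assignment described above can be sketched in a few lines (the Virtual User and address names are illustrative):

```python
from itertools import cycle

def assign_ips(virtual_users, ip_addresses):
    """Assign each Virtual User an IP address from the machine's configured
    list in round-robin order. Once assigned, a Virtual User keeps that
    address for every test it executes."""
    rotation = cycle(ip_addresses)
    return {user: next(rotation) for user in virtual_users}

# A machine configured with two addresses alternates them across Virtual Users.
print(assign_ips(["VU1", "VU2", "VU3", "VU4"], ["10.10.29.9", "10.10.29.10"]))
```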
A SOAtest project that you use in Load Test can have external dependencies in the form of files that are used by the SOAtest tests while they are being executed by the Virtual Users of the load test. The correct functioning of the SOAtest project tests depends on the availability of these external resources on both the Load Test controller machine and remote machines running load generators. Load Test automates the process of transferring the external dependencies to remote machines.
For details on how to ensure that all external resources required by the project are available on the remote machine, see Transferring Project External Dependencies to Remote Machines.