You can set up a cluster of Parasoft Virtualize servers to achieve horizontal scaling and fault tolerance. Horizontal scaling is commonly used to optimize performance testing and cloud-based infrastructures for continuous integration and continuous delivery. Fault tolerance is commonly used to increase uptime and/or facilitate disaster recovery for business-critical backend systems.
Prerequisites
- Two or more Virtualize servers. The servers must be the exact same version, including the same service pack (if applicable), and they all must have server licenses.
- Two or more servers to host two or more instances of Virtualize.
- A load balancer that supports source address affinity persistence.
- Access to a source control system.
Load Balancer Configuration
Configure your load balancer to route traffic to each Virtualize server. If you are using Parasoft CTP or any other tool that interacts with assets through the Virtualize REST API, you will need to perform the additional configuration outlined in the following sections.
The following instructions provide general guidelines that apply to any load balancer; F5 Local Traffic Manager (LTM) is used as the example throughout.
Configuring Source Address Affinity Persistence
This section describes how to configure source address affinity persistence for Parasoft CTP. The same principles can also be applied to configure any other tool that interacts with assets through the Virtualize REST API.
Source address affinity persistence ensures that all calls from CTP are routed to the same node. Session affinity is sometimes called "sticky sessions."
Enable source address affinity persistence for the following ports:
- 9080 - Virtualize default HTTP connector (Required)
- 9443 - Virtualize default SSL connector (Required)
- 2424 - Data Repository (Required if a Data Repository exists on this node)
- 9617 - The built-in event monitor provider service (ActiveMQ)
- 9618 - The built-in server hit statistics provider service
- 9619 - The built-in JDBC provider service
With F5 Local Traffic Manager (LTM), source address affinity persistence is configured by enabling the Match Across Services and Match Across Virtual Servers options. You could, for example, use the default source_addr profile or create a custom profile to enable source address affinity persistence. The following table shows the settings and values for the default source_addr profile.
Setting | Description | Default Value |
---|---|---|
Name | Defines a unique name for the profile. Required. | No default value |
Persistence Type | Defines the type of persistence profile. Required. | Source Address Affinity |
Match Across Services | Indicates that all persistent connections from a client IP address that go to the same virtual IP address should also go to the same node. | Enabled |
Match Across Virtual Servers | Indicates that all persistent connections from the same client IP address should go to the same node. | Enabled |
Match Across Pools | Indicates that the LTM system can use any pool that contains this persistence entry. | Disabled |
Timeout | Indicates the number of seconds at which a persistence entry times out. | 180 |
Mask | Defines the mask the LTM system should use before matching with an existing persistence entry. | 0.0.0.0 |
Map Proxies | Enables or disables proxy mapping. | Enabled |
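If you prefer to configure this from the BIG-IP command line rather than the GUI, a custom profile with the settings above might be created along these lines. This is only a sketch: the profile and virtual server names are hypothetical, and the exact tmsh property names should be verified against your LTM version.
tmsh create ltm persistence source-addr virtualize_src_addr \
    defaults-from source_addr \
    match-across-services enabled \
    match-across-virtuals enabled \
    timeout 180
tmsh modify ltm virtual virtualize_http_9080 \
    persist replace-all-with { virtualize_src_addr }
The second command attaches the profile to the virtual server that fronts the Virtualize HTTP connector; repeat it for the virtual servers handling the other ports listed above.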
Using Priority-based Member Activation
Priority-based member activation ensures that all calls from any origin will go to the main machine first (if possible). To configure this, set the first machine as the highest priority. For example, the following is a sample pool configuration file from F5 documentation for Local Traffic Manager:
pool my_pool {
   lb_mode fastest
   min active members 2
   member 10.12.10.7:80 priority 3
   member 10.12.10.8:80 priority 3
   member 10.12.10.9:80 priority 3
   member 10.12.10.4:80 priority 2
   member 10.12.10.5:80 priority 2
   member 10.12.10.6:80 priority 2
   member 10.12.10.1:80 priority 1
   member 10.12.10.2:80 priority 1
   member 10.12.10.3:80 priority 1
}
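The example above uses the older bigpipe-style syntax from the F5 documentation. On current LTM versions, a roughly equivalent pool could be defined with tmsh as sketched below; the member addresses are carried over from the example, and the syntax should be checked against your version.
tmsh create ltm pool my_pool \
    load-balancing-mode fastest-node \
    min-active-members 2 \
    members add { \
        10.12.10.7:80 { priority-group 3 } \
        10.12.10.8:80 { priority-group 3 } \
        10.12.10.9:80 { priority-group 3 } \
        10.12.10.4:80 { priority-group 2 } \
        10.12.10.5:80 { priority-group 2 } \
        10.12.10.6:80 { priority-group 2 } \
        10.12.10.1:80 { priority-group 1 } \
        10.12.10.2:80 { priority-group 1 } \
        10.12.10.3:80 { priority-group 1 } }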
Virtualize Configuration
Before configuring Virtualize for load balancing, ensure that the load balancer is configured so that changes are sent to only one node.
- Share the VirtualAssets project and all of its associated content (.pmpdd, .pvadd, .pjcdd, VirtualAssets.xml, .git, etc.) across the cluster. We recommend using a git fetch command as a post-commit trigger to synchronize sharing (see the sketch after this list).
- Enable refreshing using native hooks or polling:
  - From the GUI:
    - Choose Window > Preferences > General > Workspace from the Virtualize main menu.
    - Enable the Refresh using native hooks or polling option and restart the server.
  - From the command line:
    - Open the org.eclipse.core.resources.prefs file in the <WORKSPACE>/.metadata/.plugins/org.eclipse.core.runtime/.settings/ directory and add the refresh.enabled=true property, for example:
      eclipse.preferences.version=1
      version=1
      refresh.enabled=true
    - Restart the server.
- Add the following Java option to your startup command on all nodes in the cluster:
  -J-Dparasoft.auto.deploy.new=false
- Restart your servers.
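One way to wire up the git-based sharing described in the first step is to have the node where a change is committed push that commit to a shared remote, and have every other node fetch from that remote. The hook below is only a sketch; the remote name, branch, and workspace path are hypothetical and depend on how you host the shared VirtualAssets repository.
#!/bin/sh
# Example .git/hooks/post-commit hook for the node where a change is committed:
# push the new VirtualAssets commit so the other nodes can fetch it.
git push origin HEAD
Each of the other nodes would then run git fetch (followed by a fast-forward merge) against the shared remote, for example from a scheduled job such as:
cd /opt/parasoft/workspace/VirtualAssets && git fetch origin && git merge --ff-only origin/master
With Refresh using native hooks or polling enabled, each server picks up the updated files automatically.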
Deployments in the Virtualize cluster will eventually synchronize. When a deploy request is sent, a single server processes the request and returns a response. The file system then notifies the other servers in the cluster so they can make the changes required for consistency. In some cases, this may take several seconds, so there may be a brief window during which some servers are not yet aware of a deployment that happened on another server.
Parasoft Data Repository Configuration
Parasoft Data Repositories, which are based on MongoDB, should be configured for clustering by identifying a primary Data Repository (the one that has the data you want replicated) and creating a replica set. To do this, follow this procedure (adapted from the MongoDB documentation):
- Stop all of the Data Repositories you want to become part of the replica set.
- Start the primary repository server:
  ./bin/mongod --port 2424 --dbpath repositories
  If the "repositories" directory does not exist, create it first.
- Connect to the mongo shell and create the admin database:
  ./bin/mongo --port 2424
  use admin
- Run the following command to create an admin user:
  db.createUser({user:"<username>",pwd:"<password>",roles:[{role:"root",db:"admin"}]});
  If the command fails because the user already exists, run the following command instead:
  db.grantRolesToUser('admin',[{role:"<role>"}])
- Stop the primary mongo server.
- Create your key file. The following example shows the basic commands:
  openssl rand -base64 741 > mongodb-keyfile
  chmod 600 mongodb-keyfile
- Copy the key file to each member of the replica set.
- Start each member of the replica set with the following command:
  ./bin/mongod --replSet "rs0" --dbpath repositories --port 2424 --auth --keyFile <mongodb-keyfile>
- On the primary server, connect to the mongo shell and provide the password when prompted:
  ./bin/mongo localhost:2424/admin -u <admin_user> -p
- Run the following commands to verify authentication:
  use admin
  db.auth("<admin_user>", "<password>");
  The shell should return 1.
- Initiate the replica set with the following command:
  rs.initiate()
- Verify your replica set configuration with the following command:
  rs.conf()
- If the host for your primary server is incorrect, use the following commands to change it:
  cfg = rs.conf()
  cfg.members[0].host = "mongodb1.example.net:27017"
  rs.reconfig(cfg)
- For each secondary server, run the following command in the primary server's mongo shell:
  rs.add("<secondary>:<port>")
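Once the secondaries have been added, you can confirm that replication is healthy from the primary's mongo shell. This check is not part of the procedure above, but rs.status() is a standard MongoDB replica set command:
./bin/mongo localhost:2424/admin -u <admin_user> -p
rs.status()
Every member listed in the output should report "health" : 1 and a stateStr of PRIMARY or SECONDARY.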
Event Monitoring
The built-in event monitoring provider available in Virtualize should not be used with a cluster of load-balanced servers. Because the built-in provider only monitors events from the individual server it runs on, it cannot provide a full picture of events across the cluster. As a result, you will need to set up a separate monitoring provider and configure the clustered virtual servers to use it for event monitoring. See "Enabling the Event Monitoring" on the Gaining Visibility into Server Events page for more information about that process.
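For example, if you choose a standalone ActiveMQ broker as the shared monitoring provider (one possible choice; the Virtualize-side settings are covered on the page referenced above), the broker would run on a host that every node and every monitoring client can reach. The broker host is a placeholder here, and 61616 is ActiveMQ's default OpenWire port:
./bin/activemq start
Each Virtualize server in the cluster would then be configured to publish its events to tcp://<broker-host>:61616 rather than to its own built-in broker on port 9617.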
Additional Information
- CTP and Virtualize Desktop can modify live assets. Because they communicate with the nodes through the load balancer, CTP and Virtualize Desktop treat the cluster as a single server. The name of the server sent to CTP (as configured in the Virtualize server or settings.properties) must be the same on all nodes.
- Recording is not supported in a clustered environment. Recording should be performed on your staging infrastructure prior to asset promotion.
- All AUT traffic is sent to the load balancer and routed to the appropriate node using sticky sessions. See Configuring Source Address Affinity Persistence.
- Ensure that changes are sent to only one server. The load balancer for the Virtualize servers should be configured to send configuration/‘change’ messages to a single node in the cluster. This should be done for the port (defaults are 9080/9443) and/or path (/axis2 SOAP or /soavirt/api/ REST) that handles ‘changes’ sent from the Virtualize desktop or CTP.
- Ensure consistent behavior from stateful assets. The Virtualize Server load balancer must direct all traffic for a single session to a single node.
- Traffic for a stateful asset should only be sent to one node. Traffic should not be routed in "first available" mode; otherwise, copies of the asset on different nodes could each change the state independently.
- If stateful assets are intended to hold their state across sessions, then the nodes need to store the state in a synchronized data source, for example, a data repository. Be aware, however, that there will be latency between the data repository updates and subsequent requests for the updated information.