You can set up a cluster of Parasoft Virtualize servers to achieve horizontal scaling and fault tolerance. Horizontal scaling is commonly used to optimize performance testing and cloud-based infrastructures for continuous integration and continuous delivery. Fault tolerance is commonly used to increase uptime and/or facilitate disaster recovery for business-critical backend systems.
Configure your load balancer to distribute traffic across your Virtualize servers. If you are using Parasoft CTP or any other tool that interacts with assets through the Virtualize REST API, you will also need to perform the additional configuration described in the following sections.
The instructions below apply to any load balancer; we use F5 Local Traffic Manager (LTM) as an example to provide general guidance on load balancer configuration.
This section describes how to configure source address affinity persistence for Parasoft CTP. The same principles can be applied to any other tool that interacts with assets through the Virtualize REST API.
Source address affinity persistence ensures that all calls from CTP are routed to the same node. Session affinity is sometimes called "sticky sessions."
Enable source address affinity persistence for the following ports:
With F5 Local Traffic Manager (LTM), source address affinity persistence is configured by enabling the Match Across Services and Match Across Virtual Servers options. You could, for example, use the default source_addr profile or create a custom profile to enable source address affinity persistence. The following table shows the settings and values for the default source_addr profile.
| Setting | Description | Default Value |
|---|---|---|
| Name | Defines a unique name for the profile. Required. | No default value |
| Persistence Type | Defines the type of persistence profile. Required. | Source Address Affinity |
| Match Across Services | Indicates that all persistent connections from a client IP address that go to the same virtual IP address should also go to the same node. | Enabled |
| Match Across Virtual Servers | Indicates that all persistent connections from the same client IP address should go to the same node. | Enabled |
| Match Across Pools | Indicates that the LTM system can use any pool that contains this persistence entry. | Disabled |
| Timeout | Indicates the number of seconds after which a persistence entry times out. | 180 |
| Mask | Defines the mask the LTM system should use before matching with an existing persistence entry. | 0.0.0.0 |
| Map Proxies | Enables or disables proxy mapping. | Enabled |
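If you prefer to create the profile from the command line instead of the Configuration utility, the settings above could be scripted roughly as follows. This is a hedged sketch: the profile name is arbitrary, and tmsh option names can differ between LTM versions, so verify the syntax against your F5 documentation.

```
# Hedged example: create a custom source address affinity persistence profile
# with both match-across options enabled ("virtualize_src_persist" is an arbitrary name).
tmsh create ltm persistence source-addr virtualize_src_persist \
    match-across-services enabled \
    match-across-virtuals enabled \
    timeout 180
```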
Priority-based member activation ensures that all calls, regardless of origin, go to the main machine first whenever possible. To configure this, assign the highest priority to that machine; the load balancer falls back to lower-priority members only when too few higher-priority members are available. For example, the following sample pool configuration is taken from the F5 Local Traffic Manager documentation:
```
pool my_pool {
   lb_mode fastest
   min active members 2
   member 10.12.10.7:80 priority 3
   member 10.12.10.8:80 priority 3
   member 10.12.10.9:80 priority 3
   member 10.12.10.4:80 priority 2
   member 10.12.10.5:80 priority 2
   member 10.12.10.6:80 priority 2
   member 10.12.10.1:80 priority 1
   member 10.12.10.2:80 priority 1
   member 10.12.10.3:80 priority 1
}
```
Ensure that the load balancer is configured so that changes are propagated to only one node before configuring Virtualize for load balancing.
Use the git fetch command as a post-commit trigger to synchronize sharing; a sample trigger script is sketched after the restart step below. Add the refresh.enabled=true property, e.g.:
```
eclipse.preferences.version=1
version=1
refresh.enabled=true
```
Restart the server.
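The exact trigger mechanism depends on how you share assets, so the following is only a rough sketch of the git fetch trigger mentioned above. It assumes each node's shared asset project is a Git working copy at a placeholder path and that your trigger (post-commit hook, CI job, or scheduled task) runs the script on every node.

```
#!/bin/sh
# Hedged sketch: refresh a node's shared asset project from the central repository.
# The path is a placeholder; wire this script into your post-commit trigger or scheduler.
cd /path/to/workspace/VirtualAssets || exit 1
git fetch origin
```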
-J-Dparasoft.auto.deploy.new=false
Deployments in the Virtualize cluster will eventually synchronize. When a deploy request is sent, a single server processes the request and then returns a response. The file system then notifies the other servers in the cluster to make the changes required for consistency. In some cases, this may take several seconds, so there may be a brief period when some servers are not yet aware of a deployment that happened on another server.
Parasoft Data Repositories, which are based on MongoDB, should be configured for clustering by identifying a primary Data Repository (the one that has the data you want replicated) and creating a replica set. To do this, follow this procedure (adapted from the MongoDB documentation):
Create the key file each member of the replica set will use to authenticate servers to each other.
To generate pseudo-random data to use for a keyfile, issue the following openssl command:
```
openssl rand -base64 741 > mongodb-keyfile
chmod 600 mongodb-keyfile
```
You may generate a key file using any method you choose. Always ensure that the password stored in the key file is long and contains a high amount of entropy. Using openssl in this manner helps generate such a key.
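Every member of the replica set must use the same key file, so copy it to each Data Repository host before starting the members. A minimal sketch, reusing the placeholder host names and directory from the examples below:

```
# Distribute the key file to the other replica set members (host names and the
# target directory are placeholders) and restrict its permissions on each host.
for host in mongodb1.example.net mongodb2.example.net; do
    scp mongodb-keyfile "$host:/mysecretdirectory/mongodb-keyfile"
    ssh "$host" chmod 600 /mysecretdirectory/mongodb-keyfile
done
```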
Beginning with your primary Data Repository, start each member of the replica set with the --keyFile and --replSet command-line options (to specify the key file and the name of the replica set, respectively). To add these options, edit the Data Repository's server.sh or server.bat script at the line that invokes mongod. For example:
```
mongod --keyFile /mysecretdirectory/mongodb-keyfile --replSet "rs0"
```
Connect to the primary Data Repository and authenticate as the admin user, created by the M_USER variable in the server.sh or server.bat script:
"rs.add("mongodb1.example.net:2424") |
On the primary data repository, initiate the replica set using rs.initiate():
```
rs.initiate()
```
This initiates a set that consists of the current member and that uses the default replica set configuration.
On the primary data repository, verify the initial replica set configuration by using rs.conf()
to display the replica set configuration object:
```
rs.conf()
```
The replica set configuration object should resemble the following:
{ "_id" : "rs0", "version" : 1, "members" : [ { "_id" : 1, "host" : "mongodb0.example.net:27017" } ] } |
Add the remaining replica set members to the replica set with the rs.add() method. You must be connected to the primary data repository to add members to a replica set.
rs.add() can, in some cases, trigger an election. If the Data Repository you are connected to becomes a secondary, you need to connect the mongo shell to the new primary to continue adding new replica set members. Use rs.status() to identify the primary in the replica set.
The following example adds two members:
rs.add("mongodb1.example.net") rs.add("mongodb2.example.net") |
When complete, you have a fully functional replica set. The new replica set will elect a primary.
Check the status of the replica set using the rs.status() operation:
```
rs.status()
```
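To spot-check the cluster from a script, you can print each member's replication state; a healthy set reports one PRIMARY and the remaining members as SECONDARY. A hedged sketch, reusing the placeholder host and port from the earlier steps:

```
# Print each member's name and replication state (expect one PRIMARY, rest SECONDARY).
# Host and port are placeholders; add the -u/-p/--authenticationDatabase options
# used above, since the key file enforces authentication.
mongo --host mongodb0.example.net --port 2424 --quiet \
      --eval 'rs.status().members.forEach(function(m){ print(m.name + " : " + m.stateStr); })'
```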