This topic provides recommendations on how to configure a cluster of Parasoft Virtualize servers to achieve benefits such as:
Sections in this topic include:
In all cases, you need to configure your load balancer to distribute traffic across each Virtualize server in the cluster.
If you are using Parasoft CTP (or any other tool that provisions, copies, modifies, or otherwise manipulates assets through the Virtualize REST API), you will also need to perform the additional configuration outlined in the following sections.
Note that the following instructions apply to any load balancer. F5 Local Traffic Manager (LTM) examples are provided to give you an idea of how these general guidelines might be carried out on a specific load balancer.
This section explains how to configure source address affinity persistence for Parasoft CTP. The same principles can also be applied to any other tool that provisions, copies, modifies, or otherwise manipulates assets through the Virtualize REST API.
Source address affinity persistence will ensure that all calls from Parasoft CTP go to the same node. This session affinity is also known as "sticky sessions."
You'll want to enable source address affinity persistence (session stickiness) for the following ports:
With F5 Local Traffic Manager (LTM), source address affinity persistence is configured by enabling "Match Across Services" and "Match Across Virtual Servers." For example, to implement source address affinity persistence with F5's LTM, you could use the default source_addr profile or create a custom profile. The following table shows the settings and values for the default source_addr profile.
Setting | Description | Default Value |
---|---|---|
Name | Defines a unique name for the profile. Required. | No default value |
Persistence Type | Defines the type of persistence profile. Required. | Source Address Affinity |
Match Across Services | Indicates that all persistent connections from a client IP address which go to the same virtual IP address should also go to the same node. | Enabled |
Match Across Virtual Servers | Indicates that all persistent connections from the same client IP address should go to the same node. | Enabled |
Match Across Pools | Indicates that the LTM system can use any pool that contains this persistence entry. | Disabled |
Timeout | Indicates the number of seconds at which a persistence entry times out. | 180 |
Mask | Defines the mask the LTM system should use before matching with an existing persistence entry. | 0.0.0.0 |
Map Proxies | Enables or disables proxy mapping. | Enabled |
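The behavior described by these settings can be sketched in Python: the balancer applies the Mask to the client address, then consistently maps the masked address to one node. This is an illustrative model only; the node names and hashing scheme are assumptions, not F5 internals.

```python
import hashlib
import ipaddress

# Hypothetical pool of Virtualize nodes.
NODES = ["virtualize-node-1", "virtualize-node-2", "virtualize-node-3"]

def pick_node(client_ip: str, mask: str = "0.0.0.0") -> str:
    """Map a (masked) client IP to a node, modeling source address affinity.

    A mask of 0.0.0.0 (the profile default) is modeled here as "use the
    full client address"; a narrower mask such as 255.255.255.0 makes all
    clients in the same subnet share one persistence entry.
    """
    ip = int(ipaddress.IPv4Address(client_ip))
    m = int(ipaddress.IPv4Address(mask))
    key = ip & m if m else ip
    digest = hashlib.sha256(str(key).encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]
```

With this model, repeated calls from one client IP always land on the same node, which is the "sticky session" behavior the persistence profile provides.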
Priority-based member activation ensures that all calls from any origin will go to the main machine first (if possible). To configure this, set the first machine as the highest priority. For example, the following is a sample pool configuration file from F5 documentation for Local Traffic Manager:
```
pool my_pool {
   lb_mode fastest
   min active members 2
   member 10.12.10.7:80 priority 3
   member 10.12.10.8:80 priority 3
   member 10.12.10.9:80 priority 3
   member 10.12.10.4:80 priority 2
   member 10.12.10.5:80 priority 2
   member 10.12.10.6:80 priority 2
   member 10.12.10.1:80 priority 1
   member 10.12.10.2:80 priority 1
   member 10.12.10.3:80 priority 1
}
```
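The selection logic behind priority-based activation can be sketched as follows: the balancer uses the highest-priority members that are up, and only activates a lower-priority group when fewer than the configured minimum of active members remain. This is an illustrative sketch, not F5's implementation; the member addresses are taken from the sample pool above.

```python
def active_members(members, min_active):
    """Return the members to route to, honoring priority-based activation.

    members: list of (address, priority, up) tuples; higher priority wins.
    Priority groups are added from highest to lowest until at least
    min_active members that are up have been collected.
    """
    chosen = []
    for priority in sorted({p for _, p, _ in members}, reverse=True):
        chosen += [(a, p, up) for a, p, up in members if p == priority and up]
        if len(chosen) >= min_active:
            break
    return [a for a, _, _ in chosen]
```

For example, with `min active members 2`, traffic stays on the priority-3 members while at least two of them are up; if the priority-3 group drops below two healthy members, priority-2 members are activated to make up the difference.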
Before you start configuring Virtualize for load balancing, ensure that the load balancer is configured so that changes are propagated to only one node.
Next, do the following to ensure that the changes are then synchronized to the other nodes via the shared file system (SAN and/or NAS):
From the command line (for a headless launch): Add refresh.enabled=true to the /.metadata/.plugins/org.eclipse.core.runtime/.settings/org.eclipse.core.resources.prefs file, then restart the server. For example:

```
eclipse.preferences.version=1
version=1
refresh.enabled=true
```
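A small script can make this edit idempotently, for example when preparing many nodes. This is a minimal sketch; the workspace path is an assumption you should adjust to your installation.

```python
from pathlib import Path

# Assumed workspace location; adjust to your Virtualize workspace.
PREFS = Path("workspace/.metadata/.plugins/org.eclipse.core.runtime"
             "/.settings/org.eclipse.core.resources.prefs")

def enable_refresh(path: Path) -> None:
    """Append refresh.enabled=true unless the prefs file already sets it."""
    path.parent.mkdir(parents=True, exist_ok=True)
    lines = path.read_text().splitlines() if path.exists() else []
    if "refresh.enabled=true" not in lines:
        lines.append("refresh.enabled=true")
        path.write_text("\n".join(lines) + "\n")
```

Running the function twice leaves a single refresh.enabled=true line in place, so it is safe to include in a provisioning script.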
```
-J-Dparasoft.auto.deploy.new=false
```
Note that deployments in the Virtualize cluster will be eventually consistent. When a deploy request is sent, a single server processes the request and then returns a response. The file system then notifies the other servers in the cluster to make the changes required for consistency. In some cases, this may take several seconds. As a result, there may be a brief period when some servers are not yet aware of a deployment that happened on another server.
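If a client needs to wait until the cluster has converged, it can poll each node until all of them report the deployment. The sketch below assumes a caller-supplied check function (for example, a query against each node's REST API); the function name and polling strategy are illustrative, not part of the Virtualize API.

```python
import time

def wait_for_deployment(nodes, is_deployed, timeout=30.0, interval=1.0):
    """Poll each node until all report the deployment, or raise on timeout.

    is_deployed(node) is a caller-supplied check returning True once the
    node is aware of the deployment (hypothetical here).
    """
    deadline = time.monotonic() + timeout
    pending = set(nodes)
    while pending:
        pending = {n for n in pending if not is_deployed(n)}
        if not pending:
            return
        if time.monotonic() > deadline:
            raise TimeoutError(f"nodes never converged: {sorted(pending)}")
        time.sleep(interval)
```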
Parasoft Data Repositories, which are based on MongoDB, should be configured for clustering by identifying a primary Data Repository (the one that has the data you want replicated) and creating a replica set. To do this, follow this procedure (taken and modified from the MongoDB documentation):
Create the key file each member of the replica set will use to authenticate servers to each other.
To generate pseudo-random data to use for a keyfile, issue the following openssl command:
```
openssl rand -base64 741 > mongodb-keyfile
chmod 600 mongodb-keyfile
```
You may generate a key file using any method you choose. Always ensure that the password stored in the key file is long and contains a high amount of entropy. Using openssl in this manner helps generate such a key.
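If openssl is not available, an equivalent keyfile can be generated programmatically. The following sketch mirrors the openssl rand / chmod 600 commands above using Python's cryptographically secure random source; the function name is illustrative.

```python
import base64
import os
import stat

def write_keyfile(path: str, nbytes: int = 741) -> None:
    """Write a high-entropy base64 keyfile readable only by its owner,
    mirroring: openssl rand -base64 741 > keyfile && chmod 600 keyfile."""
    key = base64.b64encode(os.urandom(nbytes))
    with open(path, "wb") as f:
        f.write(key)
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # chmod 600
```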
Beginning with your primary data repository, start each member of the replica set with the --keyFile and --replSet command-line options (to specify the key file and the name of the replica set, respectively). To add these options, edit the Data Repository's server.sh or server.bat script file (at the line that calls mongod). For example:
```
mongod --keyFile /mysecretdirectory/mongodb-keyfile --replSet "rs0"
```
Connect to the primary Data Repository and authenticate as the admin user created by the M_USER variable in the server.sh or server.bat script. The replica set commands shown in the following steps (for example, rs.add("mongodb1.example.net:2424")) must be issued from this authenticated session.
On the primary data repository, initiate the replica set using rs.initiate():
```
rs.initiate()
```
This initiates a set that consists of the current member and that uses the default replica set configuration.
On the primary data repository, verify the initial replica set configuration by using rs.conf() to display the replica set configuration object:
```
rs.conf()
```
The replica set configuration object should resemble the following:
```
{
    "_id" : "rs0",
    "version" : 1,
    "members" : [
        {
            "_id" : 1,
            "host" : "mongodb0.example.net:27017"
        }
    ]
}
```
Add the remaining replica set members to the replica set with the rs.add() method. You must be connected to the primary data repository to add members to a replica set.
rs.add() can, in some cases, trigger an election. If the Data Repository you are connected to becomes a secondary, you need to connect the mongo shell to the new primary to continue adding new replica set members. Use rs.status() to identify the primary in the replica set.
The following example adds two members:
```
rs.add("mongodb1.example.net")
rs.add("mongodb2.example.net")
```
When complete, you have a fully functional replica set. The new replica set will elect a primary.
Check the status of the replica set using the rs.status() operation:
```
rs.status()
```
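If you automate this health check, the rs.status() document can be inspected programmatically: a healthy set has exactly one member in the PRIMARY state and every member in either the PRIMARY or SECONDARY state. A minimal sketch, assuming the status document has already been fetched as a dict:

```python
def replica_set_healthy(status: dict) -> bool:
    """Check an rs.status()-style document: exactly one PRIMARY and every
    member in a PRIMARY or SECONDARY state."""
    states = [m.get("stateStr") for m in status.get("members", [])]
    return states.count("PRIMARY") == 1 and all(
        s in ("PRIMARY", "SECONDARY") for s in states
    )
```

Members in states such as RECOVERING or DOWN cause the check to fail, which is usually what you want before routing traffic to the cluster.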