
You can set up a cluster of Parasoft Virtualize servers to achieve benefits such as horizontal scaling and fault tolerance. Horizontal scaling is commonly used to optimize performance testing and cloud-based infrastructures for continuous integration and continuous delivery. Fault tolerance is commonly used to increase uptime and/or facilitate disaster recovery for business-critical backend systems.

In this section:

Table of Contents

Prerequisites

  • Two or more Virtualize servers. The servers must be the exact same version, including the same service pack (if applicable), and all must be licensed for the Virtualize command line (virtualizecli)
  • A shared file system
  • Two or more servers to host two or more instances of Virtualize
  • A load balancer that supports source address affinity persistence
  • Access to a source control system 

Load Balancer Configuration

In all circumstances, you need to configure your load balancer to route loads to each Virtualize server. If you are using Parasoft CTP (or any other tool that interacts with assets through the Virtualize REST API), you will also need to perform the additional configuration outlined in the following sections.

The following instructions apply to any load balancer. F5 Local Traffic Manager (LTM) examples are provided to give you an idea of how these general guidelines might be carried out on a specific load balancer.

Configuring Source Address Affinity Persistence

This section describes how to configure source address affinity persistence for Parasoft CTP. The same principles can be applied to any other tool that interacts with assets through the Virtualize REST API.

Source address affinity persistence ensures that all calls from Parasoft CTP are routed to the same node. Session affinity is also called "sticky sessions."

Enable source address affinity persistence (session stickiness) for the following ports:

  • 9080 – Virtualize default HTTP connector (Required)
  • 9443 – Virtualize default SSL connector (Required)
  • 2424 – Data Repository (Required if a Data Repository exists on this node)
  • 9617 – The built-in event monitor provider service (ActiveMQ)
  • 9618 – The built-in server hit statistics provider service
  • 9619 – The built-in JDBC provider service

With F5 Local Traffic Manager (LTM), source address affinity persistence is configured by enabling the Match Across Services and Match Across Virtual Servers options. You could, for example, use the default source_addr profile or create a custom profile to enable source address affinity persistence. The following table shows the settings and values for the default source_addr profile.


Setting                      | Description                                                                                                                                  | Default Value
Name                         | Defines a unique name for the profile. Required.                                                                                             | No default value
Persistence Type             | Defines the type of persistence profile. Required.                                                                                           | Source Address Affinity
Match Across Services        | Indicates that all persistent connections from a client IP address which go to the same virtual IP address should also go to the same node. | Enabled
Match Across Virtual Servers | Indicates that all persistent connections from the same client IP address should go to the same node.                                       | Enabled
Match Across Pools           | Indicates that the LTM system can use any pool that contains this persistence entry.                                                        | Disabled
Timeout                      | Indicates the number of seconds at which a persistence entry times out.                                                                     | 180
Mask                         | Defines the mask the LTM system should use before matching with an existing persistence entry.                                              | 0.0.0.0
Map Proxies                  | Enables or disables proxy mapping.                                                                                                          | Enabled
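If you prefer to configure this from the BIG-IP command line, the following is a minimal sketch using tmsh, assuming a custom profile. The profile and virtual server names are placeholders, and you should verify the property names against your BIG-IP version:

Code Block
# Create a custom source-address persistence profile (names are placeholders).
tmsh create ltm persistence source-addr virtualize_persist \
    match-across-services enabled \
    match-across-virtuals enabled \
    timeout 180

# Attach the profile to the virtual server fronting the Virtualize cluster.
tmsh modify ltm virtual virtualize_vs persist replace-all-with { virtualize_persist }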

Using Priority-based Member Activation

...

Code Block
pool my_pool {
    lb_mode fastest
    min active members 2
    member 10.12.10.7:80 priority 3
    member 10.12.10.8:80 priority 3
    member 10.12.10.9:80 priority 3
    member 10.12.10.4:80 priority 2
    member 10.12.10.5:80 priority 2
    member 10.12.10.6:80 priority 2
    member 10.12.10.1:80 priority 1
    member 10.12.10.2:80 priority 1
    member 10.12.10.3:80 priority 1
}
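With these settings, the LTM system prefers the highest-priority members: traffic is directed to the priority 3 members as long as at least two of them are available. If the number of active priority 3 members drops below the min active members threshold, the LTM system also activates the priority 2 members, and so on down to priority 1.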

...

Virtualize Configuration

Ensure that the load balancer is configured so that changes are propagated to only one node before configuring Virtualize for load balancing. Then do the following to ensure that the changes are synchronized to the other nodes via the shared file system (SAN and/or NAS):

  1. Share the VirtualAssets project and all of its associated content (.pmpdd, .pvadd, .pjcdd, VirtualAssets.xml, .git, etc.) across the cluster. A typical setup would be to mount an NFS folder as the VirtualAssets project in each node of the cluster. We recommend using a git fetch command as a post-commit trigger to synchronize sharing (see the sketch after this procedure).
  2. Enable refreshing using native hooks or polling:
    • From the GUI:
      1. Choose Window> Preference> General> Workspace from the Virtualize main menu.
      2. Enable the Refresh using native hooks or polling option and restart the server.
    • From the command line (for a headless launch):
      1. Open the org.eclipse.core.resources.prefs file in the <INSTALL>/.metadata/.plugins/org.eclipse.core.runtime/.settings/ directory.
      2. Add the refresh.enabled=true property, then restart the server. For example:

        Code Block
        eclipse.preferences.version=1
        refresh.enabled=true
  3. Put the servers in "Cluster Mode" by adding the following Java option to your startup command on all nodes in the cluster:

    -J-Dparasoft.auto.deploy.new=false

  4. Restart your servers.
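As a minimal sketch of the git-based synchronization recommended in step 1 (assuming the VirtualAssets project on the shared file system is a clone of a central repository; the path and branch are placeholders):

Code Block
#!/bin/sh
# Hypothetical post-commit/post-receive trigger: refresh the shared
# VirtualAssets working copy on the NFS mount from the central repository.
cd /mnt/nfs/VirtualAssets || exit 1
git fetch origin
git merge --ff-only origin/master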

Deployments in the Virtualize cluster will eventually synchronize. When a deploy request is sent, a single server processes the request and then returns a response. The file system will notify the other servers in the cluster to make the changes required for consistency. In some cases, this may take several seconds. As a result, there may be a brief time when some servers are not yet aware of a deployment that happened on another server.

...

  1. Stop all of the Data Repositories that you want to include in the replica set.
  2. Create the key file each member of the replica set will use to authenticate servers to each other.
    To generate pseudo-random data to use for a keyfile, issue the following openssl command:

    Code Block
    openssl rand -base64 741 > mongodb-keyfile 
    chmod 600 mongodb-keyfile

    You may generate a key file using any method you choose. Always ensure that the password stored in the key file is long and contains a high amount of entropy. Using openssl in this manner helps generate such a key.

  3. Copy the mongodb-keyfile key file to each member of the replica set. Set the permissions of these files to 600 so that only the owner of the file can read or write this file to prevent other users on the system from accessing the shared secret.
  4. Beginning with your primary data repository, start each member of the replica set with the --keyFile and --replSet command-line options (to specify the key file and the name of the replica set, respectively). To add these options, edit the Data Repository's server.sh or server.bat script file (at the line that calls mongod). For example:

    Code Block
    mongod --keyFile /mysecretdirectory/mongodb-keyfile --replSet "rs0"
  5. Connect to the primary Data Repository and authenticate as the admin user, created by the M_USER variable in the server.sh or server.bat script. For example (host and credentials are placeholders):

    Code Block
    mongo --host mongodb0.example.net --port 2424 -u <M_USER value> -p --authenticationDatabase admin
  6. (Optional) If you want to increase the write safety of the replica set, modify the primary data repository's write concerns. With the default setting, the client returns when one member acknowledges the write; you can change this so that a majority must acknowledge the write (see the sketch after this procedure). See the MongoDB documentation for details.
  7. On the primary data repository, initiate the replica set using rs.initiate():

    Code Block
    rs.initiate()

    This initiates a set that consists of the current member and that uses the default replica set configuration.

  8. On the primary data repository, verify the initial replica set configuration by using rs.conf() to display the replica set configuration object:

    Code Block
    rs.conf() 

    The replica set configuration object should resemble the following:

    Code Block
    {
        "_id" : "rs0",
        "version" : 1,
        "members" : [
           {
               "_id" : 1,
               "host" : "mongodb0.example.net:27017"
           }
         ]
    }
  9. Add the remaining replica set members to the replica set with the rs.add() method. You must be connected to the primary data repository to add members to a replica set.
    rs.add() can, in some cases, trigger an election. If the Data Repository you are connected to becomes a secondary, you need to connect the mongo shell to the new primary to continue adding new replica set members. Use rs.status() to identify the primary in the replica set.
    The following example adds two members:

    Code Block
    rs.add("mongodb1.example.net")
    rs.add("mongodb2.example.net")

    When complete, you have a fully functional replica set. The new replica set will elect a primary.

  10. Check the status of the replica set using the rs.status() operation:

    Code Block
    rs.status()
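As a sketch of the optional write-concern change mentioned in step 6: on MongoDB versions that support getLastErrorDefaults in the replica set configuration, you could require majority acknowledgement as follows. The host and port are placeholders; run this against the current primary, and consult the MongoDB documentation for the mechanism appropriate to your version.

Code Block
# Sketch only: raise the replica set's default write concern to "majority".
mongo --host mongodb0.example.net --port 2424 --eval '
    var cfg = rs.conf();
    cfg.settings = cfg.settings || {};
    cfg.settings.getLastErrorDefaults = { w: "majority", wtimeout: 5000 };
    rs.reconfig(cfg);
'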

Tips and Tricks

Provisioning

Additional Information

  • CTP and Virtualize Desktop can modify live assets. When nodes communicate through the load balancers, CTP and Virtualize will treat the cluster as a single server. The name of the server sent to CTP (as configured in the Virtualize server or localsettings.properties) must be the same on all nodes (see the example after this list).
  • Recording is not supported in a clustered environment. Recording should be performed on your staging infrastructure prior to asset promotion.
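For example, a localsettings.properties fragment along these lines keeps the server name consistent across the cluster. The property names follow Parasoft's Environment Manager settings, but verify them against your version; the URL and name are placeholders:

Code Block
# Must be identical on every node in the cluster:
env.manager.server=http://ctp.example.com:8080
env.manager.server.name=virtualize-cluster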

Asset Promotion

  • Use source control to store ‘production grade’ assets and versioning information. Check out assets into the Shared File System for initial node configuration.

...

  • All AUT traffic is sent to the Load Balancer and filtered appropriately using sticky sessions.
  • To ensure that changes are sent to only one server, the load balancer for the Virtualize servers should be configured to send configuration/"change" messages to a single node in the cluster. This should be done for the port (defaults are 9080/9443) and/or path (/axis2 SOAP or /soavirt/api/ REST) that handles changes sent from the Virtualize desktop or CTP.
  • To ensure consistent behavior from stateful assets, the Virtualize Server load balancer must direct all traffic for a single session to a single node. Note that:
  • Traffic for a stateful asset should only be sent to one node. In other words, it should not be run in "first available" mode; this ensures that multiple assets do not change the state multiple times.
  • If stateful assets are intended to hold their state across sessions, the nodes will need to store the state in a synchronized data source (i.e., a data repository). Be aware, however, that there will be latency between the data repository updates and subsequent requests for the updated information.