...
Stop all of the Data Repositories you want to become part of the replica set.
Start the primary repository server:
```
./bin/mongod --port 2424 --dbpath repositories
```
If the "repositories" directory does not exist, you should create it.
Connect to the mongo shell and switch to the admin database:
```
./bin/mongo --port 2424
use admin
```
Run the following command to create the admin user:
```
db.createUser({user:"<username>",pwd:"<password>",roles:[{role:"root",db:"admin"}]});
```
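If the command succeeds, the shell prints a confirmation similar to the following (the exact formatting varies by MongoDB version):

```
Successfully added user: {
        "user" : "<username>",
        "roles" : [ { "role" : "root", "db" : "admin" } ]
}
```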
If the command fails because the user already exists, run the following command:
```
db.grantRolesToUser("<username>", [{role: "root", db: "admin"}])
```
Stop the primary mongo server.
Create your key file. The following example shows the basic commands:
```
openssl rand -base64 741 > mongodb-keyfile
chmod 600 mongodb-keyfile
```
Copy the key file to each member of the replica set.
Start each member of the replica set with the following command:
```
./bin/mongod --replSet "rs0" --dbpath repositories --port 2424 --auth --keyFile <mongodb-keyfile>
```
On the primary server, connect to the mongo shell and provide the password when prompted:
```
./bin/mongo localhost:2424/admin -u <admin_user> -p
```
Run the following commands to verify authentication:
```
use admin
db.auth("<admin_user>", "<password>");
```
The shell should return 1.

Initiate the replica set with the following command:
```
rs.initiate()
```
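Alternatively, rs.initiate() accepts a configuration document that declares every member up front. A minimal sketch, using placeholder host names; if you initiate this way, the rs.add() step below is unnecessary:

```
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongodb1.example.net:2424" },
    { _id: 1, host: "mongodb2.example.net:2424" },
    { _id: 2, host: "mongodb3.example.net:2424" }
  ]
})
```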
Verify your replica set configuration with the following command:
```
rs.conf()
```
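The output should look roughly like the following (abbreviated here; the exact fields vary by MongoDB version):

```
{
  "_id" : "rs0",
  "version" : 1,
  "members" : [
    { "_id" : 0, "host" : "localhost:2424" }
  ]
}
```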
If the host for your primary server is incorrect, use the following commands to change the host:
```
cfg = rs.conf()
cfg.members[0].host = "mongodb1.example.net:27017"
rs.reconfig(cfg)
```
For each secondary server, run the following command in the primary server's mongo shell:
```
rs.add("<secondary>:<port>")
```
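After adding the members, you can confirm that each one has joined and reached the SECONDARY state:

```
rs.status().members.forEach(function (m) {
  print(m.name + " : " + m.stateStr)
})
```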
Event Monitoring
The built-in event monitoring provider available in Virtualize should not be used with a cluster of load-balanced servers. Because it only monitors events from the individual server it runs on, it cannot provide a complete picture of all events across the cluster. Instead, set up a separate monitoring provider and configure the clustered virtual servers to use it for event monitoring. See "Enabling the Event Monitoring" on the Gaining Visibility into Server Events page for more information about that process.
Additional Information
- CTP and Virtualize Desktop can modify live assets. When nodes communicate through the load balancer, CTP and Virtualize treat the cluster as a single server. The server name sent to CTP (as configured in the Virtualize server's localsettings or settings.properties file) must be the same on all nodes.
- Recording is not supported in a clustered environment. Recording should be performed on your staging infrastructure prior to asset promotion.
- All AUT traffic is sent to the load balancer and distributed using the load balancer's sticky sessions (see Configuring Source Address Affinity Persistence).
- Ensure that changes are sent to only one server. The load balancer for the Virtualize servers should be configured to send configuration/'change' messages to a single node in the cluster. This should be done for the port (defaults are 9080/9443) and/or path (/axis2 for SOAP or /soavirt/api/ for REST) that handles changes sent from the Virtualize desktop or CTP; a configuration sketch follows this list.
- Ensure consistent behavior from stateful assets. The Virtualize Server load balancer must direct all traffic for a single session to a single node.
- Traffic for a stateful asset should only be sent to one node. Do not run traffic in "first available" mode; otherwise, the asset's state may be changed multiple times.
- If stateful assets are intended to hold their state across sessions, then the nodes need to store the state in a synchronized data source, i.e., a data repository. Be aware, however, that there will be latency between the data repository updates and subsequent requests for the updated information.
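As an illustration of the routing rules above, the following is a minimal sketch in HAProxy syntax. HAProxy is only an example (any load balancer that supports path-based routing and source-address affinity will work), the node addresses are placeholders, and the details will differ in your environment:

```
# Hypothetical HAProxy sketch; node addresses and ports are placeholders.
frontend virtualize
    bind *:9080
    mode http
    # 'Change' messages from the Virtualize desktop or CTP are identified
    # by path and always routed to the same single node.
    acl is_change path_beg /axis2 /soavirt/api
    use_backend change_node if is_change
    default_backend virt_nodes

# All configuration/'change' traffic goes to one node only.
backend change_node
    mode http
    server node1 10.0.0.11:9080 check

# AUT traffic uses source-address affinity so each client sticks to one node.
backend virt_nodes
    mode http
    balance source
    server node1 10.0.0.11:9080 check
    server node2 10.0.0.12:9080 check
```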