
The following topic outlines Parasoft's recommendations for a production-grade deployment of Parasoft Virtualize alongside Parasoft Data Repository and Parasoft Continuous Testing Platform (CTP).

Assumptions

This document assumes that the deployment will include:

  • Virtualize server, preferably deployed via the war file
  • Parasoft Data Repository server
  • Parasoft CTP and a supporting database (see Selecting a CTP Database for details)

Deployment Approaches

There are two deployment approaches: dynamic infrastructure (for example, Docker containers or cloud VMs) or physical static infrastructure.

Dynamic infrastructures are designed for enabling dynamic, disposable test environments. This means that a test environment can be instantly provisioned from a golden template, used and dirtied, then simply destroyed. There is no need to share test environments or resources across teams or test phases; the exact environment you need is instantly spun up whenever you want it, then destroyed as soon as you’re done with it. Dynamic infrastructures provide advanced flexibility for extreme automation. Moreover, when you need to scale (for example, for performance testing), you can do that on demand. 

With a physical static infrastructure, you have permanent (dedicated) servers. This is useful when long-term scaling is anticipated and hardware is designated ahead of time for heavy usage. Such an approach is designed for high availability and fault tolerance requirements. If you have such requirements and plan to configure a cluster of Parasoft Virtualize servers behind a load balancer, also see the recommendations at Setting Up a Cluster of Virtualize Servers Behind a Load Balancer.

Dynamic Infrastructure Recommendations

Dynamic infrastructures use Docker images or cloud VMs (Microsoft Azure or Amazon AWS).
For Docker, we recommend:

  • Each image should contain a single Tomcat with CTP, Virtualize server (war file deployment), and a Data Repository server.
  • The minimum machine specs are 4 cores and 16 GB of RAM; the recommendation is 8 cores and 32 GB of RAM.
  • We do not recommend running more than 2-4 containers on the same host OS because they will compete for resources (especially network resources). Running more than 4 containers on the same host also makes any issues encountered difficult to diagnose.
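
As a rough sketch of the Docker recommendation above, a container could be started with explicit resource limits matching the recommended specs. The image name is a placeholder for an image you build yourself (containing Tomcat with CTP, the Virtualize .war, and a Data Repository server); the ports shown are common defaults and should be adjusted to your configuration.

```shell
# Sketch only: image name is hypothetical. Limits match the recommended
# specs (8 cores, 32 GB RAM). Port 8080 is the usual Tomcat port (CTP and
# the Virtualize .war); 2424 is the usual Data Repository port.
docker run -d \
  --name virtualize-env \
  --cpus=8 \
  --memory=32g \
  -p 8080:8080 \
  -p 2424:2424 \
  my-registry/parasoft-virtualize-env:latest
```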

For Azure, we recommend:

  • A Virtual Machine size of DS2_V2 or bigger.
  • An Azure Virtual Machine for Parasoft Service Virtualization (On-Demand or BYOL).

For AWS, we recommend:

  • A Virtual Machine size of t2.medium or bigger.
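
For illustration, the recommended minimum VM sizes above could be provisioned from the respective CLIs. Resource group, VM names, and image identifiers below are placeholders, and both commands assume credentials are already configured.

```shell
# Azure: provision a VM at the recommended minimum size (DS2_v2).
az vm create \
  --resource-group my-virtualize-rg \
  --name virtualize-vm \
  --image <marketplace-image> \
  --size Standard_DS2_v2

# AWS: provision an instance at the recommended minimum size (t2.medium).
aws ec2 run-instances \
  --image-id <ami-id> \
  --instance-type t2.medium \
  --count 1
```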

Physical Static Infrastructure Recommendations

The following diagram shows the recommended architecture for a deployment with three Virtualize servers and CTP. Note that while three Virtualize servers are shown here, you can have as many as you need.

Desktop users should develop assets using their local Virtualize desktop, then connect to a Virtualize Staging Server to test the assets in the staging environment. There, they can monitor events on the Staging Server and debug their assets. When a virtual asset is ready, they check in the changes to the source control management (SCM) system.

Assets on Production or Performance servers should be updated through the SCM. This ensures that the behavior and performance of assets running on those servers are not affected by ongoing virtual asset development and helps keep the production/performance servers stable. Performance servers should be configured based on the principles outlined in Performance Tuning a Virtualize Server. Asset updates from the SCM can be scheduled or run on demand, depending on what best fits your needs. For example, deployments in Kubernetes might use an initContainer to pull from the SCM, while those using a physical server might have a job that runs on demand from an automation server.
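
An on-demand update job of the kind described above might look like the following sketch. The repository URL and all paths are hypothetical; the deployment directory depends on how your Virtualize server's workspace is configured.

```shell
#!/bin/sh
# Sketch of an SCM-driven asset update job for a production Virtualize
# server, suitable for triggering from an automation server or a schedule.
set -e

ASSET_REPO="git@example.com:team/virtual-assets.git"        # hypothetical repo
CHECKOUT_DIR="/opt/parasoft/asset-checkout"                 # hypothetical path
DEPLOY_DIR="/opt/parasoft/virtualize/workspace/VirtualAssets"  # hypothetical path

# Clone on first run; fast-forward pull thereafter.
if [ -d "$CHECKOUT_DIR/.git" ]; then
  git -C "$CHECKOUT_DIR" pull --ff-only
else
  git clone "$ASSET_REPO" "$CHECKOUT_DIR"
fi

# Mirror the checked-in assets into the server workspace.
rsync -a --delete "$CHECKOUT_DIR/" "$DEPLOY_DIR/"
```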

Using a single server for both staging and production carries inherent risks and is not recommended. For example, a desktop user could redeploy an asset while a test is using the server, causing failures, or monitor an asset during a load test, reducing throughput. Without separate staging and production environments, a desktop user's development of assets can negatively impact automated tests or load testing.

Minimize the network latency between a Virtualize server and a Data Repository server as much as possible; latency between the two processes can degrade the Virtualize server's performance under load. The most common approach is to install them on the same machine, or on two machines that are co-located (for example, in the same data center region). Because the Data Repository server can consume a large amount of memory, a machine hosting both it and a Virtualize server should have at least 32 GB of RAM.

We recommend the following hardware for the Virtualize, Data Repository, and CTP server machines:

Virtualize

  • CPU: 8 cores
  • RAM: 32 GB or higher
  • Dedicated disk space: 300 GB or higher

Data Repository

  • CPU: 4 cores
  • RAM: 32 GB or higher
  • Dedicated disk space: 300 GB or higher

CTP

  • CPU: 4 cores
  • RAM: 8 GB
  • Dedicated disk space: 150 GB

Notes

  • Since the database for CTP will eventually host multiple teams’ artifacts as binaries, it will grow quite large.
  • On a Virtualize server, the workspace typically accounts for the greatest storage allocation. All artifacts and data (including data snapshots) are stored there before being checked in to source control.
  • With a Virtualize server, ample CPU and memory are essential for achieving optimal performance.
  • With Virtualize server and Data Repository server machines, increasing the amount of RAM should boost performance, especially with Data Repository memory mapping.
  • Meeting or exceeding the storage requirements for Data Repository servers is critical. You will be storing multiple versions of the same service and database data from multiple teams, across multiple environments.

Operating System

We recommend Linux over Windows for Parasoft deployments because:

  • Administrative tasks (such as startup, restart, logging, and debugging) are easier. If you want to run an application in the background as a service, Linux provides nohup, init.d, and cron; Windows does not offer an equally simple solution.
  • Permissions are easier to manage. A service account can be created so that only an admin group can view/modify files such as startup scripts, server.xml, and so on.
  • Performance is better. On Windows, the core OS consumes considerable resources; on Linux, the machine can be stripped down to the bare essentials, allowing Virtualize to run in its own space.
  • Automation efforts are simplified thanks to tools like ssh and scp. Parasoft offers a package of scripts for these tasks on Linux.
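
As a quick illustration of the last two points (host names, service accounts, and paths below are hypothetical):

```shell
# Run a server process in the background as a simple service; nohup keeps it
# alive after logout and the redirect captures its output.
nohup /opt/parasoft/start-server.sh > /var/log/parasoft-server.log 2>&1 &

# Automate remote administration with ssh/scp: restart a remote Virtualize
# server's Tomcat and pull back its log for inspection.
ssh svc-virtualize@vserver1.example.com 'sudo systemctl restart tomcat'
scp svc-virtualize@vserver1.example.com:/opt/tomcat/logs/catalina.out ./vserver1.out
```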

Selecting a CTP Database 

CTP supports Oracle, HyperSQL, and MySQL, but we strongly recommend Oracle or HyperSQL over MySQL. In short, our recommendation is:

Oracle >= HyperSQL > MySQL

Oracle and HyperSQL perform equally well, but MySQL is difficult to troubleshoot. If you do not plan to cluster the CTP database and you have sufficient space, we recommend HyperSQL.

Starting Up Cloud-based Dynamic Infrastructure Deployments

When VMs are deployed in the cloud through cloud service providers, such as Microsoft Azure and Amazon AWS, machine IDs may change as the VM is shut down and restarted. Use the following flag when starting Parasoft products to ensure that the machine ID remains stable when VMs are restarted on cloud platforms:

-Dparasoft.cloudvm=true 
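
For example, in a Tomcat-based (.war) deployment, the flag can be appended to the JVM options through Tomcat's setenv script (the path is hypothetical and depends on your installation):

```shell
# $CATALINA_BASE/bin/setenv.sh -- picked up automatically by Tomcat's
# startup scripts; appends the Parasoft cloud VM flag to the JVM options.
export CATALINA_OPTS="$CATALINA_OPTS -Dparasoft.cloudvm=true"
```

For a standalone server, the same flag can be passed directly on the Java command line used to launch it.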