...
This document assumes that the deployment will include:
- Virtualize server, preferably deployed via the WAR file
- Parasoft Data Repository server
- Parasoft CTP and a supporting database (see Selecting a CTP Database for details)
...
There are two deployment approaches: dynamic infrastructure (Docker image, Azure VM, or AWS VM) or physical static infrastructure.
A dynamic infrastructure is designed to enable dynamic, disposable test environments. This means that a test environment can be instantly provisioned from a golden template, used and dirtied, then simply destroyed. There is no need to share test environments or resources across teams or test phases; the exact environment you need is instantly spun up whenever you want it, then destroyed as soon as you're done with it. Dynamic infrastructures provide advanced flexibility for extreme automation. Moreover, when you need to scale (for example, for performance testing), you can do so on demand.
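The provision-use-destroy lifecycle described above can be sketched with Docker. Note that the image name, registry, and port below are placeholders for illustration, not official Parasoft artifacts; substitute the golden-template image built for your own environment.

```shell
# Placeholder image and port: substitute your own golden-template image.
# Provision a disposable Virtualize server from the template:
docker run -d --rm --name vt-test-env -p 9080:9080 my-registry/virtualize-golden:latest

# ...point your tests at http://localhost:9080, use and "dirty" the environment...

# Destroy it as soon as the test run is done; --rm removes the container.
docker stop vt-test-env
```

Because the container is created fresh from the template each time, there is no cleanup or state-reset step between test runs.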
With a physical static infrastructure, you have permanent (dedicated) servers. This is useful when long-term scaling is anticipated and hardware is designated ahead of time for heavy usage. Such an approach is designed for high availability and fault tolerance requirements. If you have such requirements and plan to configure a cluster of Parasoft Virtualize servers behind a load balancer, also see the recommendations in Setting Up a Cluster of Virtualize Servers Behind a Load Balancer in the Virtualize User's Guide.
Dynamic Infrastructure (Docker, Azure, AWS) Recommendations
...
The following diagram shows the recommended architecture for a deployment with three Virtualize servers and CTP; note that the "Server n" icons in the lower right corner represent any number of additional servers (as appropriate for your environment).
...
Info: If possible, separate Virtualize and Data Repository. This is important for "future proofing" your deployment. This becomes especially important:
...
- Since the database for CTP will eventually host multiple teams’ artifacts as binaries, it will grow quite large.
- On the Virtualize server, the workspaces typically account for the greatest storage allocation. All artifacts and data (including data snapshots) are stored here before being checked in to source control.
- With the Virtualize server, strong processing power is essential for achieving optimal performance.
- With Virtualize server and Data Repository server machines, increasing the amount of RAM should boost performance, especially with Data Repository memory mapping.
- Meeting or exceeding the storage requirements for Data Repository servers is critical. You will be storing multiple versions of the same service and database data, from multiple teams and across multiple environments.
...
- Administrative tasks (such as startup, restart, logging, and debugging) are easier. Also, if you want to run an application "behind the scenes" as a service, you can use `nohup`, `init.d`, or `cron` on Linux. Windows does not provide an equally simple solution.
- Permissions are easier to manage. A service account can be created so that only an admin group can view/modify files such as startup scripts, server.xml, and so on.
- Performance is better. With Windows, the core OS consumes a considerable amount of resources. With Linux, the machine can be stripped down to bare bones, allowing Virtualize to run in its own space.
- Automation efforts are simplified thanks to tools like `ssh` and `scp`. Parasoft offers a package of scripts for these tasks on Linux.
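As a concrete illustration of the `nohup` pattern mentioned above, here is a minimal, runnable sketch. Note that `sleep 30` is only a stand-in for the actual Virtualize startup script, and all file paths are placeholders, not Parasoft-documented locations:

```shell
# Sketch only: `sleep 30` stands in for the real Virtualize startup
# script in an actual install. nohup detaches the process from the
# terminal so it keeps running after logout.
nohup sleep 30 > /tmp/virtualize-demo.log 2>&1 &
echo $! > /tmp/virtualize-demo.pid   # record the PID for later shutdown

# Later, an automation script can stop the "service" by PID:
kill "$(cat /tmp/virtualize-demo.pid)"
```

For unattended startup at boot, the same launch line can be wrapped in an `init.d` script (or scheduled with `cron`'s `@reboot` directive), which is the kind of simple service management Windows does not offer out of the box.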
...