
This topic provides high-level information about deploying DTP to Docker and cloud environments. In this section:

Table of Contents
maxLevel2

Download

You can download a ready-to-use DTP container image from Docker Hub: https://hub.docker.com/r/parasoft/dtp. If you intend to deploy Extension Designer, download the image with Extension Designer.
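
For example, you might pull the image with the Docker CLI as shown below. The "latest" tag is an assumption; choose the tag that matches the DTP version you plan to deploy.

Code Block
languagetext
docker pull parasoft/dtp:latest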

Architecture Considerations

...

You can deploy containerized instances of DTP to dynamic infrastructure, such as Amazon Web Services (AWS) or Microsoft Azure, which alleviates the management overhead associated with maintaining your own host environment. Refer to the documentation for your cloud environment for additional information.
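
For example, if you plan to run the container on AWS, a common first step is pushing the DTP image to a private registry such as Amazon ECR. The sketch below uses standard AWS CLI commands; the region and account ID are placeholders you would replace with your own values.

Code Block
languagetext
# Create a private repository and push the DTP image to it
aws ecr create-repository --repository-name parasoft/dtp
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag parasoft/dtp:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/parasoft/dtp:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/parasoft/dtp:latest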

Deploying DTP in Docker

To deploy DTP in Docker, follow the process outlined below.

Create a Docker volume and start DTP

First, you need to create a Docker volume called "parasoft-volume" as shown below.

Code Block
languagetext
docker volume create parasoft-volume

Then start DTP.  Please note the following:

  • Certain conditions must be met for the machineId to remain the same when new containers are started. Those conditions are:
    • The container must be created and started with the root user.
    • The Docker socket must be mounted.
    • The Docker volume "parasoft-volume" created previously must be mounted.
  • Mount a persistent volume at DTP_DATA_DIR so that DTP data is preserved across container restarts:
    • DTP_DATA_DIR:/usr/local/parasoft/data

An example of starting DTP with these considerations is shown below. If you are deploying Extension Designer, see Starting Extension Designer below for additional commands you might need to include.

Code Block
languagetext
docker run --name dtp -p 8080:8080 -p 8082:8082 -p 8443:8443 --user root:root -v /var/run/docker.sock:/var/run/docker.sock -v parasoft-volume:/mnt/parasoft -v DTP_DATA_DIR:/usr/local/parasoft/data -d DTP_DOCKER_IMAGE
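
Once the container is running, you can confirm that DTP started by tailing the container logs; the exact startup messages vary by version.

Code Block
languagetext
docker logs -f dtp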

...

Parasoft has published official Docker images to Docker Hub for your convenience. Full installation instructions are included in the readme with each image. Two versions are available: one with Extension Designer and one without. Choose the image that best suits your needs from the Docker Hub repository linked in the Download section above.


Starting Extension Designer

If you are deploying Extension Designer, add the following port to the Docker run command above:

Code Block
languagetext
-p 8314:8314

Once the container is started, you will need to modify the reverse proxy settings in the Extension Designer UI to use the correct hostname. By default, Extension Designer assumes the Docker container ID is the hostname, so without this modification the application may malfunction. In addition, if you have published non-default ports for DTP and/or Extension Designer (for example, -p 8400:8314), these ports need to be reflected in the reverse proxy settings as well.

Note that, while modifying the reverse proxy settings is the recommended approach, if all default ports are used for DTP and Extension Designer, you can avoid configuring the reverse proxy settings by passing in the hostname when running the container for the first time. Keep in mind that the hostname flag is only available with the docker run command.

Example:

Code Block
languagetext
--hostname foobar.example.com 
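
For reference, a complete first run that combines the volume, port, and hostname options described above might look like the following. It assumes the default ports and the "parasoft-volume" created earlier.

Code Block
languagetext
docker run --name dtp --hostname foobar.example.com -p 8080:8080 -p 8082:8082 -p 8443:8443 -p 8314:8314 --user root:root -v /var/run/docker.sock:/var/run/docker.sock -v parasoft-volume:/mnt/parasoft -v DTP_DATA_DIR:/usr/local/parasoft/data -d DTP_DOCKER_IMAGE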

If you are using a remote MongoDB database, you should configure the necessary environment variables using the -e flag when starting the Docker container.
Example:

Code Block
languagetext
-e DEP_USE_REMOTE_DB=true -e DEP_DB_HOSTNAME=mongo-db -e DEP_DB_PORT=27017 

Set up DTP to connect to your database

After starting the DTP container, you need to set up the DTP_DATA_DIR directory so that DTP can connect to your database. First, change to the third-party library directory:

Code Block
languagetext
cd DTP_DATA_DIR/lib/thirdparty/

Then download the correct connector driver for your database. Note that "sudo" must be used any time volume data is modified externally, since the data is owned by the root user. An example of downloading the MySQL connector driver is shown below.

Code Block
languagetext
sudo wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar

Using sudo, modify DTP_DATA_DIR/conf/PSTRootConfig.xml to point to your database server. By default, it points to a localhost URL for MySQL.

Initialize the DTP database.  An example using MySQL is shown below.

Code Block
languagetext
docker exec dtp cat dtp/grs/db/dtp/mysql/create.sql | docker exec -i mysql mysql -uroot -proot 
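
To confirm that the schema was created, you can list the matching databases. This sketch assumes the MySQL container and root credentials from the example above and that the schema is named DTP; adjust it for your database.

Code Block
languagetext
docker exec -i mysql mysql -uroot -proot -e "SHOW DATABASES LIKE 'DTP'"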

Restart the DTP container

Once the changes are completed, restart the DTP container for them to take effect.

Code Block
languagetext
docker container restart dtp

The DTP deployment is now complete and DTP should be accessible in your browser.
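
As a quick check from the host, you can verify that DTP responds on the published HTTP port (assuming the default port mapping above):

Code Block
languagetext
curl -sSf http://localhost:8080/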

Deploying DTP in Kubernetes

To deploy DTP in Kubernetes, follow the process outlined below.

Note

Deploying multiple DTP servers in Kubernetes is not supported with this version. Support is limited to a single instance of DTP running in a Kubernetes cluster.

Prerequisites

First, you will need a Kubernetes cluster. After starting the cluster, create the namespace, service account, and permissions required by the DTP pod and related resources. An example of a yaml file that might be used for this purpose is shown below.

Code Block
languageyml
titleparasoft-permissions.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: parasoft-namespace
---
# Stable access for clients to license server
kind: Service
apiVersion: v1
metadata:
  name: parasoft-service
  namespace: parasoft-namespace
spec:
  selector:
    tag: parasoft-service
  ports:
    - name: https
      port: 443
      protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: parasoft-account
  namespace: parasoft-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: parasoft-namespace-role
  namespace: parasoft-namespace
rules:
- apiGroups:
  - "*"
  resources:
  - "*"
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: parasoft-read-role
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - namespaces
  verbs:
  - get
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: parasoft-read-bind
  namespace: parasoft-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: parasoft-read-role
subjects:
- kind: ServiceAccount
  name: parasoft-account
  namespace: parasoft-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: parasoft-namespace-bind
  namespace: parasoft-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: parasoft-namespace-role
subjects:
- kind: ServiceAccount
  name: parasoft-account
  namespace: parasoft-namespace

Use your yaml file to create the required namespace, service account, and permissions before creating the DTP environment:

Code Block
languagetext
kubectl create -f parasoft-permissions.yaml

You should see something similar to the output below in your console:

Code Block
languagetext
namespace/parasoft-namespace created
service/parasoft-service created
serviceaccount/parasoft-account created
clusterrole.rbac.authorization.k8s.io/parasoft-namespace-role created
clusterrole.rbac.authorization.k8s.io/parasoft-read-role created
clusterrolebinding.rbac.authorization.k8s.io/parasoft-read-bind created
clusterrolebinding.rbac.authorization.k8s.io/parasoft-namespace-bind created
Warning

The "parasoft-namespace" namespace defined in the provided configuration is required and we recommend using the "parasoft-permissions.yaml" as it is documented. The service account used by the DTP Pod requires access to the "parasoft-namespace" namespace, therefore if you choose to create a custom permissions configuration that has different names for the resources defined in the provided permissions configuration, then a namespace with the name "parasoft-namespace" must also be created. If this namespace requirement is not met, DTP will treat any license installed as invalid.

DTP Setup

To set up DTP, create a yaml file that defines a secret (optional), volume, pod, internal-access service, and external-access service (optional). The secret is used to pull the DTP image from the repository. The pod is set up to run the DTP server and Data Collector in separate containers. Each container is configured with a volume to persist data and a liveness probe, which is the Kubernetes equivalent of a Docker Healthcheck. The internal-access service exposes the DTP pod to other pods, allowing them to communicate via the service name instead of an explicit IP address. The external-access service makes DTP and Data Collector accessible via external clients by allocating ports in the node and mapping them to ports in the pod. An example yaml file that might be used for this purpose is shown below. In the example, an NFS volume is used, but this is not required; use whatever volume type fits your needs.
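
For example, if NFS is not available in your cluster, you could back the dtp-data volume with a PersistentVolumeClaim instead. The sketch below assumes your cluster has a default StorageClass; the claim name "dtp-data-pvc" is illustrative. You would then replace the "nfs:" block in the pod definition with a "persistentVolumeClaim" reference to this claim.

Code Block
languagetext
kubectl apply -n parasoft-namespace -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dtp-data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF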

Code Block
languageyml
titleparasoft-dtp.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dtp
  namespace: parasoft-namespace
  labels:
    app: DTP
spec:
  volumes:
    - name: dtp-data
      nfs:
        server: NFS_SERVER_HOST
        path: /dtp/
# Uncomment section below if you are setting up a custom keystore; you will also need to uncomment the associated volumeMounts below
#   - name: keystore-cfgmap-volume
#     configMap:
#       name: keystore-cfgmap
  containers:
    - name: dtp-server
      image: DTP_DOCKER_IMAGE
# To inject JVM arguments into the container, specify the "env" property as in the example below, which injects JAVA_CONFIG_ARGS
#     env:
#       - name: JAVA_CONFIG_ARGS
#         value: "-Dparasoft.use.license.v2=true"
      args: ["--run", "dtp"]
      imagePullPolicy: Always
      ports:
        - name: "http-server"
          containerPort: 8080
        - name: "https-server"
          containerPort: 8443
      volumeMounts:
        - mountPath: "/usr/local/parasoft/data"
          name: dtp-data
# Uncomment section below if you are setting up a custom keystore. Note that updates made to these files will not be reflected inside the container once it's been deployed; you will need to restart the container for it to contain any updates.
#       - name: keystore-cfgmap-volume
#         mountPath: "/usr/local/parasoft/dtp/tomcat/conf/.keystore"
#         subPath: keystore
#       - name: keystore-cfgmap-volume
#         mountPath: "/usr/local/parasoft/dtp/tomcat/conf/server.xml"
#         subPath: server-config
# To prevent liveness probe failures on environments with low or overly taxed RAM/CPU, we recommend increasing the timeout seconds
      livenessProbe:
        exec:
          command:
          - healthcheck.sh
          - --verify
          - dtp
        initialDelaySeconds: 120
        periodSeconds: 60
        timeoutSeconds: 30
        failureThreshold: 5
    - name: data-collector
      image: DTP_DOCKER_IMAGE
# To inject JVM arguments into the container, specify the "env" property as in the example below, which injects JAVA_DC_CONFIG_ARGS
#      env:
#        - name: JAVA_DC_CONFIG_ARGS
#          value: "-Dcom.parasoft.sdm.dc.traffic.max.length=250000 -Dcom.parasoft.sdm.dc.build.details.to.keep=5"
      args: ["--run", "datacollector", "--no-copy-data"]
      imagePullPolicy: Always
      ports:
        - containerPort: 8082
      volumeMounts:
        - mountPath: "/usr/local/parasoft/data"
          name: dtp-data
# To prevent liveness probe failures on environments with low or overly taxed RAM/CPU, we recommend increasing the timeout seconds
      livenessProbe:
        exec:
          command:
          - healthcheck.sh
          - --verify
          - datacollector
        initialDelaySeconds: 120
        periodSeconds: 60
        timeoutSeconds: 30
        failureThreshold: 5
# Uncomment section below if using DTP with Extension Designer
#    - name: extension-designer
#      image: DTP_DOCKER_IMAGE
#      args: ["--run", "dtpservices"]
#      imagePullPolicy: Always
#      ports:
#        - containerPort: 8314
#      volumeMounts:
#        - mountPath: "/usr/local/parasoft/data"
#          name: dtp-data
# To prevent liveness probe failures on environments with low or overly taxed RAM/CPU, we recommend increasing the timeout seconds
#      livenessProbe:
#        exec:
#          command:
#          - healthcheck.sh
#          - --verify
#          - dtpservices
#        initialDelaySeconds: 120
#        periodSeconds: 60
#        timeoutSeconds: 30
#        failureThreshold: 5
# Uncomment section below if using Extension Designer with an external MongoDB
#      env:
#      - name: DEP_USE_REMOTE_DB
#        value: "true"
#      - name: DEP_DB_HOSTNAME
#        value: "mongodb-hostname" # Put your mongodb hostname here
#      - name: DEP_DB_PORT
#        value: "27017"
  restartPolicy: Always
  serviceAccountName: parasoft-account
  imagePullSecrets:
    - name: YOUR_SECRET
---
apiVersion: v1
kind: Service
metadata:
  name: dtp
  namespace: parasoft-namespace
spec:
  selector:
    app: DTP
  ports:
    - name: "http-server"
      protocol: TCP
      port: 8080
      targetPort: 8080
    - name: "data-collector"
      protocol: TCP
      port: 8082
      targetPort: 8082
    - name: "https-server"
      protocol: TCP
      port: 8443
      targetPort: 8443
# Uncomment section below if using DTP with Extension Designer
#    - name: "extension-designer"
#      protocol: TCP
#      port: 8314
#      targetPort: 8314
---
apiVersion: v1
kind: Service
metadata:
  name: dtp-external
  namespace: parasoft-namespace
spec:
  type: NodePort
  selector:
    app: DTP
  ports:
    - port: 8080
      name: HTTP_PORT_NAME
      nodePort: XXXXX
    - port: 8082
      name: DC_PORT_NAME
      nodePort: XXXXX
    - port: 8443
      name: HTTPS_PORT_NAME
      nodePort: XXXXX
# Uncomment section below if using DTP with Extension Designer
#    - port: 8314
#      name: EXTENSION_DESIGNER_PORT_NAME
#      nodePort: XXXXX
 
# SERVICE CONFIG NOTES:
# 'name' can be whatever you want
# 'nodePort' must be between 30000-32768
# 'spec.selector' must match 'metadata.labels' in pod config

...

This will initialize the contents of the persistent volume; however, additional setup is required for the DTP and Data Collector containers to run correctly.

If you injected JVM arguments into a container and want to verify their status, run the following command:

Code Block
languagetext
kubectl exec <POD_NAME> -c <CONTAINER_NAME> -- printenv
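
For example, to narrow the output to the JVM argument variables used in the sketches above (adjust the pattern to the variable names you actually set):

Code Block
languagetext
kubectl exec <POD_NAME> -c <CONTAINER_NAME> -- printenv | grep JAVA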

Set up DTP to connect to your database

Download and install the relevant JDBC driver in the persistent volume that is mounted to the DTP data directory.
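
For example, with the NFS volume from the configuration above, you could download the MySQL connector directly into the export on the NFS host. The path below is an assumption based on the example pod definition, where /dtp/ is mounted at the DTP data directory.

Code Block
languagetext
sudo wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar -P /dtp/lib/thirdparty/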

...

Note

If you are using DTP with Extension Designer, after you have completed the initial setup you will need to update the Reverse Proxy settings in Extension Designer to reflect the expected hostname and the exposed ports for accessing DTP and Extension Designer.

Custom Truststore

Using a custom truststore in Kubernetes environments is similar to using a custom keystore as described above. Adjust the directions for using a custom keystore as appropriate. Note that the truststore location is /usr/local/parasoft/dtp/jre/lib/security/cacerts.
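
For example, mirroring the keystore steps above, you might create a configuration map for a custom cacerts file and mount it at that location in the dtp-server container; the map and key names here are illustrative.

Code Block
languagetext
kubectl create configmap truststore-cfgmap --from-file=cacerts=/path/to/cacerts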