The "parasoft-namespace" namespace defined in the provided configuration is required and we recommend using the "parasoft-permissions.yaml" as it is documented. The service account used by the DTP Pod requires access to the "parasoft-namespace" namespace, therefore if you choose to create a custom permissions configuration that has different names for the resources defined in the provided permissions configuration, then a namespace with the name "parasoft-namespace" must also be created. If this namespace requirement is not met, DTP will treat any license installed as invalid.
To deploy DTP in Kubernetes, follow the process outlined below.
Deploying multiple DTP servers in Kubernetes is not supported with this version. Support is limited to a single instance of DTP running in a Kubernetes cluster.
Prerequisites
First, you will need a Kubernetes cluster. After starting the cluster, create the namespace, service account, and permissions required by the DTP pod and related resources. An example of a yaml file that might be used for this purpose is shown below.
apiVersion: v1
kind: Namespace
metadata:
  name: parasoft-namespace
---
# Stable access for clients to license server
kind: Service
apiVersion: v1
metadata:
  name: parasoft-service
  namespace: parasoft-namespace
spec:
  selector:
    tag: parasoft-service
  ports:
    - name: https
      port: 443
      protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: parasoft-account
  namespace: parasoft-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: parasoft-namespace-role
  namespace: parasoft-namespace
rules:
  - apiGroups:
      - "*"
    resources:
      - "*"
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: parasoft-namespace-bind
  namespace: parasoft-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: parasoft-namespace-role
subjects:
  - kind: ServiceAccount
    name: parasoft-account
    namespace: parasoft-namespace
Use your yaml file to create the required namespace, service account, and permissions before creating the DTP environment:
kubectl create -f parasoft-permissions.yaml
You should see something similar to the output below in your console:
namespace/parasoft-namespace created
service/parasoft-service created
serviceaccount/parasoft-account created
role.rbac.authorization.k8s.io/parasoft-namespace-role created
rolebinding.rbac.authorization.k8s.io/parasoft-namespace-bind created
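If you want to confirm that everything was created where expected, you can list the resources in the namespace; the names below assume the provided parasoft-permissions.yaml was used unmodified:
kubectl get serviceaccount,role,rolebinding,service -n parasoft-namespace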
Custom Keystore
If you want to set up a custom keystore, you will need to create a configuration map for the ".keystore" and "server.xml" files. The command below creates a configuration map called "keystore-cfgmap" with file mappings for the custom ".keystore" and "server.xml" files. In this example, each file mapping is given a key: "keystore" for the .keystore file and "server-config" for the server.xml file. While giving each file mapping a key is not necessary, it is useful when you don't want the key to be the file name.
~$ kubectl create configmap keystore-cfgmap --from-file=keystore=/path/to/.keystore --from-file=server-config=/path/to/server.xml
configmap/keystore-cfgmap created
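Note that configuration maps are namespace-scoped, so the map must exist in the same namespace as the DTP pod that mounts it (for example, by appending -n parasoft-namespace to the command above if you deploy DTP into that namespace). To confirm that both keys are present, you can describe the configuration map; the namespace flag here assumes it was created in "parasoft-namespace":
~$ kubectl describe configmap keystore-cfgmap -n parasoft-namespace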
DTP Setup
To set up DTP, create a yaml file that defines a secret (optional), volume, pod, internal-access service, and external-access service (optional). The secret is used to pull the DTP image from the repository. The pod is set up to run the DTP server and Data Collector in separate containers. Each container is configured with a volume to persist data and a liveness probe, which is the Kubernetes equivalent of a Docker Healthcheck. The internal-access service exposes the DTP pod to other pods, allowing them to communicate via the service name instead of an explicit IP address. The external-access service makes DTP and Data Collector accessible to external clients by allocating ports on the node and mapping them to ports in the pod. An example yaml file that might be used for this purpose is shown below. In the example, an NFS volume is used, but this is not required; use whatever volume type fits your needs.
apiVersion: v1
kind: Pod
metadata:
  name: dtp
  namespace: parasoft-namespace
  labels:
    app: DTP
spec:
  volumes:
    - name: dtp-data
      nfs:
        server: NFS_SERVER_HOST
        path: /dtp/
# Uncomment section below if you are setting up a custom keystore; you will also need to uncomment the associated volumeMounts below
#    - name: keystore-cfgmap-volume
#      configMap:
#        name: keystore-cfgmap
  containers:
    - name: dtp-server
      image: DTP_DOCKER_IMAGE
# To inject JVM arguments into the container, specify the "env" property as in the example below, which injects JAVA_CONFIG_ARGS
#      env:
#        - name: JAVA_CONFIG_ARGS
#          value: "-Dparasoft.use.license.v2=true"
      args: ["--run", "dtp"]
      imagePullPolicy: Always
      ports:
        - name: "http-server"
          containerPort: 8080
        - name: "https-server"
          containerPort: 8443
      volumeMounts:
        - mountPath: "/usr/local/parasoft/data"
          name: dtp-data
# Uncomment section below if you are setting up a custom keystore. Note that updates made to these files will not be reflected inside the container once it's been deployed; you will need to restart the container for it to contain any updates.
#        - name: keystore-cfgmap-volume
#          mountPath: "/usr/local/parasoft/dtp/tomcat/conf/.keystore"
#          subPath: keystore
#        - name: keystore-cfgmap-volume
#          mountPath: "/usr/local/parasoft/dtp/tomcat/conf/server.xml"
#          subPath: server-config
# To prevent liveness probe failures on environments with low or overly taxed RAM/CPU, we recommend increasing the timeout seconds
      livenessProbe:
        exec:
          command:
            - healthcheck.sh
            - --verify
            - dtp
        initialDelaySeconds: 120
        periodSeconds: 60
        timeoutSeconds: 30
        failureThreshold: 5
    - name: data-collector
      image: DTP_DOCKER_IMAGE
# To inject JVM arguments into the container, specify the "env" property as in the example below, which injects JAVA_DC_CONFIG_ARGS
#      env:
#        - name: JAVA_DC_CONFIG_ARGS
#          value: "-Dcom.parasoft.sdm.dc.traffic.max.length=250000 -Dcom.parasoft.sdm.dc.build.details.to.keep=5"
      args: ["--run", "datacollector", "--no-copy-data"]
      imagePullPolicy: Always
      ports:
        - containerPort: 8082
      volumeMounts:
        - mountPath: "/usr/local/parasoft/data"
          name: dtp-data
# To prevent liveness probe failures on environments with low or overly taxed RAM/CPU, we recommend increasing the timeout seconds
      livenessProbe:
        exec:
          command:
            - healthcheck.sh
            - --verify
            - datacollector
        initialDelaySeconds: 120
        periodSeconds: 60
        timeoutSeconds: 30
        failureThreshold: 5
# Uncomment section below if using DTP with Extension Designer
#    - name: extension-designer
#      image: DTP_DOCKER_IMAGE
#      args: ["--run", "dtpservices"]
#      imagePullPolicy: Always
#      ports:
#        - containerPort: 8314
#      volumeMounts:
#        - mountPath: "/usr/local/parasoft/data"
#          name: dtp-data
# To prevent liveness probe failures on environments with low or overly taxed RAM/CPU, we recommend increasing the timeout seconds
#      livenessProbe:
#        exec:
#          command:
#            - healthcheck.sh
#            - --verify
#            - dtpservices
#        initialDelaySeconds: 120
#        periodSeconds: 60
#        timeoutSeconds: 30
#        failureThreshold: 5
# Uncomment section below if using Extension Designer with an external MongoDB
#      env:
#        - name: DEP_USE_REMOTE_DB
#          value: "true"
#        - name: DEP_DB_HOSTNAME
#          value: "mongodb-hostname" # Put your mongodb hostname here
#        - name: DEP_DB_PORT
#          value: "27017"
  restartPolicy: Always
  serviceAccountName: parasoft-account
  imagePullSecrets:
    - name: YOUR_SECRET
---
apiVersion: v1
kind: Service
metadata:
  name: dtp
  namespace: parasoft-namespace
spec:
  selector:
    app: DTP
  ports:
    - name: "http-server"
      protocol: TCP
      port: 8080
      targetPort: 8080
    - name: "data-collector"
      protocol: TCP
      port: 8082
      targetPort: 8082
    - name: "https-server"
      protocol: TCP
      port: 8443
      targetPort: 8443
# Uncomment section below if using DTP with Extension Designer
#    - name: "extension-designer"
#      protocol: TCP
#      port: 8314
#      targetPort: 8314
---
apiVersion: v1
kind: Service
metadata:
  name: dtp-external
  namespace: parasoft-namespace
spec:
  type: NodePort
  selector:
    app: DTP
  ports:
    - port: 8080
      name: HTTP_PORT_NAME
      nodePort: XXXXX
    - port: 8082
      name: DC_PORT_NAME
      nodePort: XXXXX
    - port: 8443
      name: HTTPS_PORT_NAME
      nodePort: XXXXX
# Uncomment section below if using DTP with Extension Designer
#    - port: 8314
#      name: EXTENSION_DESIGNER_PORT_NAME
#      nodePort: XXXXX
# SERVICE CONFIG NOTES:
# 'name' can be whatever you want
# 'nodePort' must be between 30000-32768
# 'spec.selector' must match 'metadata.labels' in pod config
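The pod definition above pulls the DTP image using an image pull secret named YOUR_SECRET. If your registry requires authentication, one way to create such a secret is shown below; the registry address and credentials are placeholders to replace with your own values:
kubectl create secret docker-registry YOUR_SECRET -n parasoft-namespace --docker-server=<your-registry> --docker-username=<your-username> --docker-password=<your-password>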
Create the DTP environment
DTP startup can take up to 5 minutes. During this time, DTP will not be accessible from the browser.
Prepare the volume mount location on your cluster. By default, the image runs as the "parasoft" user with a UID of 1000 and GID of 1000. Prepare the volume such that this user has read and write access to it.
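For example, if you are using the NFS volume from the sample configuration above, you might grant ownership of the exported directory to that user on the NFS server; the path below is a placeholder and should match your export:
sudo chown -R 1000:1000 /path/to/exported/dtp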
Then create the DTP environment defined in the DTP setup yaml file created previously:
kubectl create -f parasoft-dtp.yaml
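Because startup can take several minutes, it can be helpful to watch the pod until both containers report that they are ready, for example:
kubectl get pod dtp -n parasoft-namespace -w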
This will initialize the contents of the persistent volume; however, additional setup is required for the DTP and Data Collector containers to run correctly.
If you injected JVM arguments into a container and want to verify their status, run the following command:
kubectl exec <POD_NAME> -c <CONTAINER_NAME> -- printenv
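For example, to check the JAVA_CONFIG_ARGS variable injected into the dtp-server container from the sample pod definition above:
kubectl exec dtp -c dtp-server -n parasoft-namespace -- printenv JAVA_CONFIG_ARGS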
Set up DTP to connect to your database
Download and install the relevant JDBC driver in the persistent volume that is mounted to the DTP data directory.
DTP_DATA_DIR/lib/thirdparty/
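One way to get the driver onto the volume, assuming the sample pod definition above where the volume is mounted at /usr/local/parasoft/data, is to copy it into the running dtp-server container with kubectl cp; the jar file name is a placeholder for whichever JDBC driver your database requires:
kubectl cp <jdbc-driver>.jar parasoft-namespace/dtp:/usr/local/parasoft/data/lib/thirdparty/ -c dtp-server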
Initialize the DTP database. For MySQL databases that exist in the same cluster:
kubectl exec dtp -c dtp-server -- cat dtp/grs/db/dtp/mysql/create.sql | kubectl exec -i <mysql pod name> -- mysql -u<username> -p<password>
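To confirm that the schema was created, you can list the databases on the same MySQL pod, using the same placeholders as above:
kubectl exec <mysql pod name> -- mysql -u<username> -p<password> -e "SHOW DATABASES;"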
Configure the database URL in the "<dtp-db-connection>" section of the PSTRootConfig.xml file. This file exists in the persistent volume that is mounted to the DTP data directory.
DTP_DATA_DIR/conf/PSTRootConfig.xml
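If you want to confirm the section you need to edit, you can print it from the dtp-server container; this assumes the data volume is mounted at /usr/local/parasoft/data as in the sample pod definition above:
kubectl exec dtp -c dtp-server -n parasoft-namespace -- grep -A 5 "<dtp-db-connection>" /usr/local/parasoft/data/conf/PSTRootConfig.xml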
Recreate the DTP environment
At this point, the DTP environment is fully configured. For the changes to take effect, the containers must be restarted. To do this, destroy the environment:
kubectl delete -f parasoft-dtp.yaml
Then recreate it using the same command from Create the DTP environment.
If you are using DTP with Extension Designer, after you have completed the initial setup, you will need to update the Reverse Proxy settings in Extension Designer to reflect the expected hostname and the exposed ports for accessing DTP and Extension Designer.
Custom Truststore
Using a custom truststore in Kubernetes environments is similar to using a custom keystore as described above. Adjust the directions for using a custom keystore as appropriate. Note that the truststore location is /usr/local/parasoft/dtp/jre/lib/security/cacerts.
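As a sketch, assuming a custom cacerts file and a configuration map named truststore-cfgmap (both hypothetical names), you could create the map and then mount it over the path above in the same way as the keystore example:
kubectl create configmap truststore-cfgmap --from-file=truststore=/path/to/cacerts -n parasoft-namespace
# Added under spec.volumes in the pod definition:
  - name: truststore-cfgmap-volume
    configMap:
      name: truststore-cfgmap
# Added under the dtp-server container's volumeMounts:
  - name: truststore-cfgmap-volume
    mountPath: "/usr/local/parasoft/dtp/jre/lib/security/cacerts"
    subPath: truststore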