...
- Certain conditions must be met for the machineId to remain the same when new containers are started. Those conditions are:
- The container must be created and started with the root user.
- The Docker socket must be mounted.
- The Docker volume "parasoft-volume" created previously must be mounted.
- A persistent volume must be mounted at the DTP data directory: DTP_DATA_DIR:/usr/local/parasoft/data
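Taken together, the conditions above might be satisfied with a docker run invocation along the following lines. This is a sketch, not a command from this document: the mount point for "parasoft-volume" and the container name are assumptions, and DTP_DOCKER_IMAGE is the image placeholder used elsewhere in this guide.

```shell
# Hypothetical sketch: start DTP as root with the Docker socket,
# the previously created "parasoft-volume", and the persistent data
# directory mounted. The parasoft-volume mount point and the
# container name "dtp" are assumptions.
docker run -d \
  --user root \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v parasoft-volume:/usr/local/parasoft \
  -v DTP_DATA_DIR:/usr/local/parasoft/data \
  --name dtp \
  DTP_DOCKER_IMAGE --run dtp
```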
You can inject JAVA_CONFIG_ARGS and JAVA_DC_CONFIG_ARGS options into the DTP Docker container by passing them as environment variables. For example:
Code Block
-e JAVA_CONFIG_ARGS="-Dcom.parasoft.sdm.storage.managers.admin.enable.delete.project.data=true" -e JAVA_DC_CONFIG_ARGS="-Dcom.parasoft.sdm.dc.traffic.max.length=250000"
...
Warning
The "parasoft-namespace" namespace defined in the provided configuration is required, and we recommend using "parasoft-permissions.yaml" as documented. The service account used by the DTP pod requires access to the "parasoft-namespace" namespace. If you create a custom permissions configuration that uses different names for the resources defined in the provided configuration, you must still create a namespace named "parasoft-namespace". If this namespace requirement is not met, DTP will treat any installed license as invalid.
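If you do build a custom permissions configuration, the required namespace can be created directly. A minimal sketch using standard kubectl commands:

```shell
# Create the namespace that DTP's license validation depends on
kubectl create namespace parasoft-namespace

# Confirm it exists
kubectl get namespace parasoft-namespace
```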
Custom Keystore
If you want to set up a custom keystore, you will need to create a configuration map for the ".keystore" and "server.xml" files. The command below creates a configuration map called "keystore-cfgmap" with file mappings for the custom ".keystore" and "server.xml" files. In this example, each file mapping is given a key: "keystore" for the .keystore file and "server-config" for the server.xml file. While giving each file mapping a key is not necessary, it is useful when you don't want the key to be the file name.
Code Block
~$ kubectl create configmap keystore-cfgmap --from-file=keystore=/path/to/.keystore --from-file=server-config=/path/to/server.xml
configmap/keystore-cfgmap created
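You can then inspect the configuration map to confirm the file mappings were stored under the expected keys. A quick check, assuming the names used above:

```shell
# Show the configuration map's contents; the data section should
# list the keys "keystore" and "server-config"
kubectl describe configmap keystore-cfgmap
```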
DTP Setup
To set up DTP, create a yaml file that defines a secret (optional), volume, pod, internal-access service, and external-access service (optional). The secret is used to pull the DTP image from the repository. The pod is set up to run the DTP server and Data Collector in separate containers. Each container is configured with a volume to persist data and a liveness probe, which is the Kubernetes equivalent of a Docker Healthcheck. The internal-access service exposes the DTP pod to other pods, allowing them to communicate via the service name instead of an explicit IP address. The external-access service makes DTP and Data Collector accessible via external clients by allocating ports in the node and mapping them to ports in the pod. An example yaml file that might be used for this purpose is shown below. In the example, an NFS volume is used, but this is not required; use whatever volume type fits your needs.
Code Block
apiVersion: v1
kind: Pod
metadata:
  name: dtp
  namespace: parasoft-namespace
  labels:
    app: DTP
spec:
  volumes:
    - name: dtp-data
      nfs:
        server: NFS_SERVER_HOST
        path: /dtp/
# Uncomment the section below if you are setting up a custom keystore; you will also need to uncomment the associated volumeMounts below
#    - name: keystore-cfgmap-volume
#      configMap:
#        name: keystore-cfgmap
  containers:
    - name: dtp-server
      image: DTP_DOCKER_IMAGE
      # To inject JVM arguments into the container, specify the "env" property as in the example below, which injects JAVA_CONFIG_ARGS
      # env:
      #   - name: JAVA_CONFIG_ARGS
      #     value: "-Dparasoft.use.license.v2=true"
      args: ["--run", "dtp"]
      imagePullPolicy: Always
      ports:
        - name: "http-server"
          containerPort: 8080
        - name: "https-server"
          containerPort: 8443
      volumeMounts:
        - mountPath: "/usr/local/parasoft/data"
          name: dtp-data
# Uncomment the section below if you are setting up a custom keystore. Note that updates made to these files will not be reflected inside the container once it has been deployed; you will need to restart the container to pick up any updates.
#        - name: keystore-cfgmap-volume
#          mountPath: "/usr/local/parasoft/dtp/tomcat/conf/.keystore"
#          subPath: keystore
#        - name: keystore-cfgmap-volume
#          mountPath: "/usr/local/parasoft/dtp/tomcat/conf/server.xml"
#          subPath: server-config
      # To prevent liveness probe failures on environments with low or overly taxed RAM/CPU, we recommend increasing the timeout seconds
      livenessProbe:
        exec:
          command:
            - healthcheck.sh
            - --verify
            - dtp
        initialDelaySeconds: 120
        periodSeconds: 60
        timeoutSeconds: 30
        failureThreshold: 5
    - name: data-collector
      image: DTP_DOCKER_IMAGE
      # To inject JVM arguments into the container, specify the "env" property as in the example below, which injects JAVA_DC_CONFIG_ARGS
      # env:
      #   - name: JAVA_DC_CONFIG_ARGS
      #     value: "-Dcom.parasoft.sdm.dc.traffic.max.length=250000 -Dcom.parasoft.sdm.dc.build.details.to.keep=5"
      args: ["--run", "datacollector", "--no-copy-data"]
      imagePullPolicy: Always
      ports:
        - containerPort: 8082
      volumeMounts:
        - mountPath: "/usr/local/parasoft/data"
          name: dtp-data
      # To prevent liveness probe failures on environments with low or overly taxed RAM/CPU, we recommend increasing the timeout seconds
      livenessProbe:
        exec:
          command:
            - healthcheck.sh
            - --verify
            - datacollector
        initialDelaySeconds: 120
        periodSeconds: 60
        timeoutSeconds: 30
        failureThreshold: 5
# Uncomment the section below if using DTP with Extension Designer
#    - name: extension-designer
#      image: DTP_DOCKER_IMAGE
#      args: ["--run", "dtpservices"]
#      imagePullPolicy: Always
#      ports:
#        - containerPort: 8314
#      volumeMounts:
#        - mountPath: "/usr/local/parasoft/data"
#          name: dtp-data
#      # To prevent liveness probe failures on environments with low or overly taxed RAM/CPU, we recommend increasing the timeout seconds
#      livenessProbe:
#        exec:
#          command:
#            - healthcheck.sh
#            - --verify
#            - dtpservices
#        initialDelaySeconds: 120
#        periodSeconds: 60
#        timeoutSeconds: 30
#        failureThreshold: 5
# Uncomment the section below if using Extension Designer with an external MongoDB
#      env:
#        - name: DEP_USE_REMOTE_DB
#          value: "true"
#        - name: DEP_DB_HOSTNAME
#          value: "mongodb-hostname" # Put your mongodb hostname here
#        - name: DEP_DB_PORT
#          value: "27017"
  restartPolicy: Always
  serviceAccountName: parasoft-account
  imagePullSecrets:
    - name: YOUR_SECRET
---
apiVersion: v1
kind: Service
metadata:
  name: dtp
  namespace: parasoft-namespace
spec:
  selector:
    app: DTP
  ports:
    - name: "http-server"
      protocol: TCP
      port: 8080
      targetPort: 8080
    - name: "data-collector"
      protocol: TCP
      port: 8082
      targetPort: 8082
    - name: "https-server"
      protocol: TCP
      port: 8443
      targetPort: 8443
# Uncomment the section below if using DTP with Extension Designer
#    - name: "extension-designer"
#      protocol: TCP
#      port: 8314
#      targetPort: 8314
---
apiVersion: v1
kind: Service
metadata:
  name: dtp-external
  namespace: parasoft-namespace
spec:
  type: NodePort
  selector:
    app: DTP
  ports:
    - port: 8080
      name: HTTP_PORT_NAME
      nodePort: XXXXX
    - port: 8082
      name: DC_PORT_NAME
      nodePort: XXXXX
    - port: 8443
      name: HTTPS_PORT_NAME
      nodePort: XXXXX
# Uncomment the section below if using DTP with Extension Designer
#    - port: 8314
#      name: EXTENSION_DESIGNER_PORT_NAME
#      nodePort: XXXXX
# SERVICE CONFIG NOTES:
# 'name' can be whatever you want
# 'nodePort' must be between 30000-32768
# 'spec.selector' must match 'metadata.labels' in pod config
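Once the yaml file is saved, the resources can be created and checked with standard kubectl commands. The file name dtp.yaml below is a placeholder, not a name from this document:

```shell
# Create the pod and services defined in the yaml file
kubectl apply -f dtp.yaml

# Watch the DTP pod start up in its namespace;
# both the dtp-server and data-collector containers should become Ready
kubectl get pods -n parasoft-namespace -w
```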
...
This will initialize the contents of the persistent volume; however, additional setup is required for the DTP and Data Collector containers to run correctly.
If you injected JVM arguments into a container and want to verify their status, run the following command:
Code Block
kubectl exec <POD_NAME> -c <CONTAINER_NAME> -- printenv
Set Up DTP to Connect to Your Database
...
Note
If you are using DTP with Extension Designer, after you have completed the initial setup you will need to update the Reverse Proxy settings in Extension Designer to reflect the expected hostname and the exposed ports for accessing DTP and Extension Designer.
Custom Truststore
Using a custom truststore in Kubernetes environments is similar to using a custom keystore as described above; adjust those directions as appropriate. Note that the truststore location is /usr/local/parasoft/dtp/jre/lib/security/cacerts.
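As an illustration, adding a certificate to a local copy of the truststore before mounting it could look like the following. The alias, certificate file, and password are assumptions, not values from this document; "changeit" is only the stock JRE default and may differ in your environment.

```shell
# Import a CA certificate into a local copy of the DTP truststore.
# "my-ca" and ca.crt are placeholders; "changeit" is the default
# JRE truststore password, which may have been changed.
keytool -importcert \
  -keystore cacerts \
  -storepass changeit \
  -alias my-ca \
  -file ca.crt \
  -noprompt
```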