
...

Code Block
languageyml
titleparasoft-permissions.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: parasoft-namespace
---
# Stable access for clients to license server
kind: Service
apiVersion: v1
metadata:
  name: parasoft-service
  namespace: parasoft-namespace
spec:
  selector:
    tag: parasoft-service
  ports:
    - name: https
      port: 443
      protocol: TCP
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: parasoft-account
  namespace: parasoft-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: parasoft-namespace-role
  namespace: parasoft-namespace
rules:
- apiGroups:
  - "*"
  resources:
  - "*"
  verbs:
  - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: parasoft-read-role
rules:
- apiGroups:
  - ""
  resources:
  - nodes
  - namespaces
  verbs:
  - get
  - read
  - list
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: parasoft-read-bind
  namespace: parasoft-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: parasoft-read-role
subjects:
- kind: ServiceAccount
  name: parasoft-account
  namespace: parasoft-namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: parasoft-namespace-bind
  namespace: parasoft-namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: parasoft-namespace-role
subjects:
- kind: ServiceAccount
  name: parasoft-account
  namespace: parasoft-namespace

Use this yaml file to create the required namespace, service, service account, and roles before creating the DTP environment:

Code Block
languagetext
kubectl create -f parasoft-permissions.yaml

You should see something similar to the output below in your console:

Code Block
languagetext
namespace/parasoft-namespace created
service/parasoft-service created
serviceaccount/parasoft-account created
role.rbac.authorization.k8s.io/parasoft-namespace-role created
clusterrole.rbac.authorization.k8s.io/parasoft-read-role created
clusterrolebinding.rbac.authorization.k8s.io/parasoft-read-bind created
rolebinding.rbac.authorization.k8s.io/parasoft-namespace-bind created
Warning

The "parasoft-namespace" namespace defined in the provided configuration is required and we recommend using the "parasoft-permissions.yaml" as it is documented. The service account used by the DTP Pod requires access to the "parasoft-namespace" namespace, therefore if you choose to create a custom permissions configuration that has different names for the resources defined in the provided permissions configuration, then a namespace with the name "parasoft-namespace" must also be created. If this namespace requirement is not met, DTP will treat any license installed as invalid.

DTP Setup

To set up DTP, create a yaml file that defines a secret (optional), volume, pod, internal-access service, and external-access service (optional). The secret is used to pull the DTP image from the repository. The pod is set up to run the DTP server and Data Collector in separate containers. Each container is configured with a volume to persist data and a liveness probe, which is the Kubernetes equivalent of a Docker Healthcheck. The internal-access service exposes the DTP pod to other pods, allowing them to communicate via the service name instead of an explicit IP address. The external-access service makes DTP and Data Collector accessible via external clients by allocating ports in the node and mapping them to ports in the pod. An example yaml file that might be used for this purpose is shown below. In the example, an NFS volume is used, but this is not required; use whatever volume type fits your needs.
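If the DTP image is hosted in a private registry, the secret referenced by "imagePullSecrets" must exist in the "parasoft-namespace" namespace before the pod is created. The command below is a minimal sketch; the registry address, credentials, and the YOUR_SECRET name are placeholders that must match your environment and the pod definition that follows:

Code Block
languagetext
kubectl create secret docker-registry YOUR_SECRET \
  --namespace parasoft-namespace \
  --docker-server=YOUR_REGISTRY \
  --docker-username=YOUR_USERNAME \
  --docker-password=YOUR_PASSWORD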

Code Block
languageyml
titleparasoft-dtp.yaml
apiVersion: v1
kind: Pod
metadata:
  name: dtp
  namespace: parasoft-namespace
  labels:
    app: DTP
spec:
  volumes:
    - name: dtp-data
      nfs:
        server: NFS_SERVER_HOST
        path: /dtp/
  containers:
    - name: dtp-server
      image: DTP_DOCKER_IMAGE
      args: ["--run", "dtp"]
      imagePullPolicy: Always
      ports:
        - name: "http-server"
          containerPort: 8080
        - name: "https-server"
          containerPort: 8443
      volumeMounts:
        - mountPath: "/usr/local/parasoft/data"
          name: dtp-data
      livenessProbe:
        exec:
          command:
            - healthcheck.sh
            - --verify
            - dtp
        initialDelaySeconds: 120
        periodSeconds: 30
        failureThreshold: 20
    - name: data-collector
      image: DTP_DOCKER_IMAGE
      args: ["--run", "datacollector", "--no-copy-data"]
      imagePullPolicy: Always
      ports:
        - containerPort: 8082
      volumeMounts:
        - mountPath: "/usr/local/parasoft/data"
          name: dtp-data
      livenessProbe:
        exec:
          command:
            - healthcheck.sh
            - --verify
            - datacollector
        initialDelaySeconds: 30
        periodSeconds: 10
        failureThreshold: 5
  restartPolicy: Always
  serviceAccountName: parasoft-account
  imagePullSecrets:
    - name: YOUR_SECRET
---
apiVersion: v1
kind: Service
metadata:
  name: dtp
  namespace: parasoft-namespace
spec:
  selector:
    app: DTP
  ports:
    - name: "http-server"
      protocol: TCP
      port: 8080
      targetPort: 8080
    - name: "data-collector"
      protocol: TCP
      port: 8082
      targetPort: 8082
    - name: "https-server"
      protocol: TCP
      port: 8443
      targetPort: 8443
---
apiVersion: v1
kind: Service
metadata:
  name: dtp-external
  namespace: parasoft-namespace
spec:
  type: NodePort
  selector:
    app: DTP
  ports:
    - port: 8080
      name: HTTP_PORT_NAME
      nodePort: XXXXX
    - port: 8082
      name: DC_PORT_NAME
      nodePort: XXXXX
    - port: 8443
      name: HTTPS_PORT_NAME
      nodePort: XXXXX

# SERVICE CONFIG NOTES:
# 'name' can be whatever you want
# 'nodePort' must be between 30000-32767
# 'spec.selector' must match 'metadata.labels' in pod config
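
After you create the DTP environment (see the next section), you can check which node ports were allocated to the external-access service; the resource name below assumes the example configuration:

Code Block
languagetext
kubectl get service dtp-external --namespace parasoft-namespace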


Create the DTP environment
Anchor
CreateDTPEnvironment
CreateDTPEnvironment

Prepare the volume mount location on your cluster. By default, the image runs as the "parasoft" user with a UID of 1000 and a GID of 1000; prepare the volume so that this user has read and write access to it.
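
For example, if the pod uses the NFS volume from the example configuration, you might prepare the exported directory on the NFS server as follows. This is only a sketch; the /dtp path comes from the example yaml, and the exact ownership and permission model depends on your storage setup:

Code Block
languagetext
# Run on the NFS server that exports the /dtp directory
sudo chown -R 1000:1000 /dtp
sudo chmod -R u+rwX /dtp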

Then create the DTP environment defined in the DTP setup yaml file created previously:

Code Block
languagetext
kubectl create -f parasoft-dtp.yaml

This will initialize the contents of the persistent volume; however, additional setup is required before the DTP and Data Collector containers will run correctly.
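
Before continuing, you can confirm that the pod was created and inspect the status of its containers; the pod and container names below assume the example configuration:

Code Block
languagetext
kubectl get pod dtp --namespace parasoft-namespace
kubectl logs dtp -c dtp-server --namespace parasoft-namespace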

Set up DTP to connect to your database

Download the JDBC driver for your database and install it in the persistent volume that is mounted to the DTP data directory, at the following location:

Code Block
languagetext
DTP_DATA_DIR/lib/thirdparty/
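
One way to do this is to copy the driver into the running dtp-server container with kubectl cp, since the DTP data directory is backed by the persistent volume in the example configuration. The jar file name below is a placeholder; you can also copy the file directly into the volume (for example, onto the NFS export):

Code Block
languagetext
kubectl cp YOUR_JDBC_DRIVER.jar parasoft-namespace/dtp:/usr/local/parasoft/data/lib/thirdparty/ -c dtp-server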

Initialize the DTP database. For example, if you are connecting to a MySQL database that exists in the same cluster:

Code Block
languagetext
kubectl exec dtp -c dtp-server -- cat dtp/grs/db/dtp/mysql/create.sql | kubectl exec -i <mysql pod name> -- mysql -u<username> -p<password>

Configure the database URL in the "<dtp-db-connection>" section of the PSTRootConfig.xml file, which exists in the persistent volume that is mounted to the DTP data directory:

Code Block
languagetext
DTP_DATA_DIR/conf/PSTRootConfig.xml
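
The exact contents of PSTRootConfig.xml are beyond the scope of this example. As a rough guide only, a MySQL connection URL generally takes the following form, where the host, port, and database name are placeholders for your environment:

Code Block
languagetext
jdbc:mysql://MYSQL_HOST:3306/DTP_DATABASE_NAME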

Recreate the DTP environment

At this point, the DTP environment is fully configured. In order for the changes to take effect, the containers must be restarted. To do this, simply destroy the environment:

Code Block
languagetext
kubectl delete -f parasoft-dtp.yaml

Then recreate it using the same command from Create the DTP environment. To access the UI in a web browser, use the node ports allocated in the service definition as the address (for example, NODE_HOST:NODE_PORT).