To deploy the Data Repository server in Kubernetes, follow the process outlined below.
Prerequisites
First, a Persistent Volume and a Persistent Volume Claim are needed. Provision the volume with 300GB of space and the ReadWriteOnce access mode; it will be used to store the data of the Data Repository server.
The default Persistent Volume Claim name is 'datarepo-pvc' and can be customized by updating the yaml definition of the Data Repository server. The example below configures an NFS Persistent Volume and Persistent Volume Claim. While the example uses NFS, this is not required; use whatever persistent volume type fits your needs.
Warning: For NFS, the exported directory must have the same UID and GID as the Parasoft user that runs the container. For example, execute the command `chown 1000:1000 <SHARED_PATH>`.
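As a quick sketch of setting and verifying the ownership, the snippet below operates on a scratch directory; on a real NFS host, run the chown (as root) against your actual exported path instead:

```shell
# Scratch directory standing in for the real exported path.
SHARED_PATH=$(mktemp -d)

# Give the directory to UID/GID 1000, the Parasoft user inside the container.
# (chown requires root; run it as root on the NFS host's export.)
chown 1000:1000 "$SHARED_PATH" 2>/dev/null || true

# Confirm the ownership is what the container expects.
stat -c '%u:%g' "$SHARED_PATH"
```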
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datarepo-pv
spec:
  capacity:
    storage: 300Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: <path>
    server: <ip_address>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: datarepo-pvc
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 300Gi
```
Save the yaml above as datarepo-pv.yaml and use it to create the Persistent Volume and Persistent Volume Claim:
```shell
kubectl create -f datarepo-pv.yaml
```
Create a secret to contain the username and password for Data Repository login. One way to create such a secret is with the following command:
```shell
kubectl create secret generic datarepo-login \
  --from-literal=username=<username> \
  --from-literal=password=<password>
```
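Note that Kubernetes stores Secret values base64-encoded, not encrypted: anyone who can read the Secret object can decode the credentials. The snippet below demonstrates the encoding you would see in the Secret's data fields ('repouser' is a made-up example value):

```shell
# 'repouser' is a hypothetical username used only to illustrate the encoding.
encoded=$(printf 'repouser' | base64)
echo "$encoded"                       # the form stored in the Secret's data field
printf '%s' "$encoded" | base64 -d    # decodes back to the original value

# To inspect the stored value in a live cluster, you could run:
# kubectl get secret datarepo-login -o jsonpath='{.data.username}' | base64 -d
```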
Data Repository Server Setup
The example yaml below configures a pod for the Data Repository server and exposes it externally via port 30017. If you used a custom name for the Persistent Volume Claim previously, make sure to update the 'claimName' field to match the custom name. If NFS is used for the persistent volume, follow the instructions in the comments inside the yaml file to configure the 'runAsUser' UID.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: datarepo
  labels:
    app: datarepo
spec:
  # If using NFS for the persistent volume, uncomment the following lines
  # and run the pod with the UID of the owner of the <shared_path>.
  # securityContext:
  #   runAsUser: <UID>
  volumes:
    - name: datarepo-volume
      persistentVolumeClaim:
        claimName: datarepo-pvc
    # To run the Data Repository server with SSL, uncomment the following lines
    # - name: datarepo-sslcert-file
    #   configMap:
    #     name: datarepo-sslcert
  containers:
    - name: datarepo
      image: mongo:latest
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: datarepo-volume
          mountPath: /data/db
        # To run the Data Repository server with SSL, uncomment the following lines
        # - name: datarepo-sslcert-file
        #   mountPath: /etc/sslCert.pem
        #   subPath: sslCert.pem
      # To run the Data Repository server with SSL, uncomment the following line
      # command: ["mongod", "--ipv6", "--bind_ip_all", "--sslMode", "requireSSL", "--sslPEMKeyFile", "/etc/sslCert.pem", "--auth"]
      startupProbe:
        exec:
          # Uncomment the first command (and comment out the second) if the
          # Data Repository server runs with SSL.
          # command: ["/bin/sh", "-c", "mongosh --host 127.0.0.1 --port 27017 --ssl --sslAllowInvalidCertificates --quiet test --eval 'quit(0)'"]
          command: ["/bin/sh", "-c", "mongosh --host 127.0.0.1 --port 27017 --quiet test --eval 'quit(0)'"]
        initialDelaySeconds: 30
        periodSeconds: 30
        timeoutSeconds: 10
        failureThreshold: 3
      livenessProbe:
        exec:
          # Uncomment the first command (and comment out the second) if the
          # Data Repository server runs with SSL.
          # command: ["/bin/sh", "-c", "mongosh --host 127.0.0.1 --port 27017 --ssl --sslAllowInvalidCertificates --quiet test --eval 'quit(0)'"]
          command: ["/bin/sh", "-c", "mongosh --host 127.0.0.1 --port 27017 --quiet test --eval 'quit(0)'"]
        initialDelaySeconds: 30
        periodSeconds: 30
        timeoutSeconds: 10
      env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: datarepo-login
              key: username
              optional: false
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: datarepo-login
              key: password
              optional: false
---
apiVersion: v1
kind: Service
metadata:
  name: datarepo
  labels:
    run: datarepo
spec:
  type: NodePort
  ports:
    - port: 27017
      protocol: TCP
      targetPort: 27017
      nodePort: 30017
  selector:
    app: datarepo
```
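The NodePort Service maps container port 27017 to port 30017 on every cluster node, so external clients connect to any node's address at the node port rather than to 27017. A sketch of the resulting connection URI (the node host name below is a placeholder):

```shell
# Hypothetical node host; any node in the cluster works for a NodePort service.
NODE=worker-1.example.com
NODE_PORT=30017   # matches nodePort in the Service definition

# Clients (mongosh, drivers) connect through the node port, not 27017.
echo "mongodb://$NODE:$NODE_PORT/"
```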
Save the yaml above as datarepo.yaml and use it to start the pod and the service:
```shell
kubectl create -f datarepo.yaml
```
Restarting Data Repository Server with SSL
Once you have set up Data Repository server in Kubernetes, you can restart it with SSL. To begin, tear down the existing pod and service:
```shell
kubectl delete -f datarepo.yaml
```
Then deploy the SSL certificate file into Kubernetes:
```shell
kubectl create configmap datarepo-sslcert \
  --from-file=sslCert.pem=<path-to-file>
```
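Note that mongod's --sslPEMKeyFile option expects the private key and the certificate concatenated in a single PEM file. For a non-production test, one way to produce such a combined file with a self-signed certificate is sketched below (the CN is a placeholder host name; use a CA-signed certificate in production):

```shell
# Generate a self-signed key and certificate (testing only).
# The CN below is a hypothetical host name.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=datarepo.example.com" \
  -keyout key.pem -out cert.pem 2>/dev/null

# Concatenate key + certificate into the single PEM file mongod expects.
cat key.pem cert.pem > sslCert.pem
```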
Reconfigure the datarepo.yaml file according to the SSL directions in its comments, then use it to start the pod and service:
```shell
kubectl create -f datarepo.yaml
```
Registering the Data Repository Server with CTP
To register the Data Repository server with Parasoft Test Data in CTP, run the following command:
```shell
curl -H "Content-Type: application/json" -H "Accept: application/json" \
  -d "{\"alias\":\"<ALIAS>\",\"host\":\"<NODE>\",\"port\":<PORT>,\"user\":\"<DR_USERNAME>\",\"password\":\"<DR_PASSWORD>\",\"ssl\":<USE_SSL>}" \
  --user "<CTP_USERNAME>:<CTP_PASSWORD>" "<CTP_URL>/em/tdm/api/v2/servers"
```
The variables in the curl command are described in the table below. Replace them with the appropriate configurations for your setup.
| Variable | Description |
|---|---|
| <ALIAS> | The name that is displayed on CTP to represent this Data Repository server |
| <NODE> | The Kubernetes node on which this Data Repository server runs |
| <PORT> | The node port number at which this Data Repository server is exposed |
| <DR_USERNAME> | Username of the login that is used to initialize this Data Repository server |
| <DR_PASSWORD> | Password of the login that is used to initialize this Data Repository server |
| <USE_SSL> | Whether this Data Repository server requires SSL, either true or false |
| <CTP_USERNAME> | Username of the CTP login |
| <CTP_PASSWORD> | Password of the CTP login |
| <CTP_URL> | The URL to CTP, of the format https://[CTP Server Host]:[Port] |
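Escaping JSON inside a quoted -d argument is error-prone; one way to assemble the request body first and then pass it to curl is sketched below (all values are hypothetical placeholders):

```shell
# Hypothetical example values; replace with your own configuration.
ALIAS=k8s-datarepo
NODE=worker-1.example.com
PORT=30017
DR_USERNAME=repouser
DR_PASSWORD=repopass
USE_SSL=false

# Build the JSON body once, then pass it to curl with: -d "$payload"
payload=$(printf '{"alias":"%s","host":"%s","port":%s,"user":"%s","password":"%s","ssl":%s}' \
  "$ALIAS" "$NODE" "$PORT" "$DR_USERNAME" "$DR_PASSWORD" "$USE_SSL")
echo "$payload"
```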