Deploying in Docker
Parasoft SOAtest and Virtualize, as well as Data Repository, can be deployed to Docker. Parasoft has published official Docker images to Docker Hub for your convenience. Full installation instructions are included in the readme with each image. Follow the links below for the image that best suits your needs:
- SOAVirt (standard installation, analogous to desktop application)
- SOAVirt Server (server installation, analogous to WAR file installation)
- Data Repository (remote, stand-alone data repository server)
Deploying SOAtest and Virtualize in Kubernetes
To deploy the soavirt server in Kubernetes, follow the process outlined below.
Prerequisites
First, a Persistent Volume and a Persistent Volume claim are needed. Create a Persistent Volume that can be shared by multiple nodes. It should be provisioned with 300GB of space and must have the ReadWriteMany access mode. This space will be used for the workspace of the soavirt server and will store configuration settings as well as Virtual Assets.
The default Persistent Volume Claim name is 'soavirt-pvc' and can be customized by updating the yaml definition of the soavirt server. The example shown below is a configuration to set up an NFS Persistent Volume and Persistent Volume Claim. While the example uses NFS, this is not required; use whatever persistent volume type fits your needs.
Warning: For NFS, the exported directory must have the same UID and GID as the Parasoft user that runs the container. For example, execute the command chown 1000:1000 <shared_path>.
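If the ownership requirement above is in doubt, it can be checked from any machine that mounts the share before the volume is created. A minimal Python sketch of the check, using a temporary directory only as a stand-in for the real shared path (on the actual export you would compare against 1000:1000, the container's Parasoft user):

```python
import os
import tempfile

def owner_ids(path):
    """Return the (uid, gid) that own a path, as set by e.g. chown 1000:1000."""
    st = os.stat(path)
    return st.st_uid, st.st_gid

# Demo on a temporary directory; on the real NFS export you would compare
# the result against (1000, 1000), the Parasoft user inside the container.
with tempfile.TemporaryDirectory() as d:
    uid, gid = owner_ids(d)
    print(uid == os.getuid())  # True: the current user owns the new directory
```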
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 300Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: <path>
    server: <ip address>
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: soavirt-pvc
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 300Gi
Use the yaml file to create a Persistent Volume and a Persistent Volume claim:
kubectl create -f soavirt-pv.yaml
SOAVirt setup
Create the service that will be used to access the soavirt server in Kubernetes. The example shown below exposes it using a node port, which provides a stable endpoint for applications to access Virtual Assets.
kind: Service
apiVersion: v1
metadata:
  name: soavirt-service
spec:
  selector:
    tag: soavirt
  type: NodePort
  ports:
    - name: http
      protocol: TCP
      port: 9080
      targetPort: 9080
      nodePort: 30080
    - name: events
      protocol: TCP
      port: 9617
      targetPort: 9617
      nodePort: 30617
    - name: statistics
      protocol: TCP
      port: 9618
      targetPort: 9618
      nodePort: 30618
Use the yaml file to create the service:
kubectl create -f soavirt-service.yaml
Optional for Ingress users: To expose the service with Ingress, the following rule can be used, with some modifications based on your Ingress setup. Be aware that Events and Statistics require TCP connections, which may not be supported by all Ingress Controllers.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: soavirt-ingress
  namespace: soavirt-server
spec:
  rules:
    - host: soavirt.company.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: soavirt-service
                port:
                  number: 9080
Use the yaml file to create the Ingress rule:
kubectl create -f soavirt-ingress.yaml
Once the service is created, create the configuration map for the soavirt server and configure either a network license or a local license.
The server EULA must be accepted by setting 'parasoft.eula.accepted=true' in the ConfigMap.
Warning: When connecting to CTP, set the 'server.hostname' property to the address of the Service.
apiVersion: v1
kind: ConfigMap
metadata:
  name: soavirt-config
data:
  config.properties: |
    # Configuration properties for soavirt server

    # === END USER LICENSE AGREEMENT ===
    # Set to true to accept the end user license agreement
    # Please review the EULA.txt file included in the distribution zip for soavirt.war
    parasoft.eula.accepted=false

    # === WORKING DIRECTORY ===
    # Specifies workspace location
    #working.dir=C:/Users/../workspace

    # === LOGGING CONFIGURATION ===
    # Specifies configuration file for logging
    logging.config.file=/WEB-INF/default.logging.xml
    # Replace with the following line to enable debug information
    #logging.config.file=/WEB-INF/debug.logging.xml

    # === CTP SERVER ===
    # Specifies CTP server endpoint
    #env.manager.server=http\://[CTP Server Host]\:8080
    # Specifies the server name that will be displayed in CTP
    #env.manager.server.name=[Server Name]
    # Specifies username for CTP authentication
    #env.manager.username=[CTP Server Username]
    # Specifies password for CTP authentication
    #env.manager.password=[CTP Server Password]
    # Enables notifications to CTP for deployments
    #env.manager.notify=true

    # === SERVLET CONTAINER ===
    # Specifies the hostname to use for remote access to this server
    # Useful when a name or address must be strictly used for CTP connectivity
    # If empty, the address will be auto-detected
    #server.hostname=[Server Hostname]
    # Specifies port for http
    # Port should match your servlet container
    server.port.http=9080
    # Specifies port for https
    # Port should match your servlet container
    #server.port.https=8443

    # === PRODUCT LICENSING ===
    # Enables virtualize functionality
    virtualize.license.enabled=true
    # Enables soatest functionality
    soatest.license.enabled=true

    # === NODE-LOCK LICENSE ===
    # Specifies password for virtualize local license
    #virtualize.license.local.password=[Virtualize License Password]
    # Specifies password for soatest local license
    #soatest.license.local.password=[Soatest License Password]

    # === NETWORK LICENSE ===
    # Enables network licensing for virtualize
    virtualize.license.use_network=true
    # Specifies the type of network license for virtualize ['performance_server_edition', 'runtime_server_edition', 'custom_edition']
    virtualize.license.network.edition=custom_edition
    # Specifies features for virtualize 'custom_edition' license
    virtualize.license.custom_edition_features=Service Enabled, Performance, Extension Pack, Validate, Message Packs, Developer Sandbox 1000 Hits/Day, 10000 Hits/Day, 25000 Hits/Day, 50000 Hits/Day, 100000 Hits/Day, 500000 Hits/Day, 1 Million Hits/Day, Unlimited Hits/Day, 30 HPS, 100 HPS
    # Enables network licensing for soatest
    soatest.license.use_network=true
    # Specifies the type of network license for soatest ['server_edition', 'custom_edition']
    soatest.license.network.edition=custom_edition
    # Specifies features for soatest 'custom_edition' license
    soatest.license.custom_edition_features=RuleWizard, Command Line, SOA, Web, Server API Enabled, Message Packs, Advanced Test Generation Desktop, Advanced Test Generation 5 Users, Advanced Test Generation 25 Users, Advanced Test Generation 100 Users, Requirements Traceability, API Security Testing

    # === LICENSE SERVER ===
    # Enables using a specific license server
    # If true, the license network properties below will be used to retrieve a license
    # If false, the DTP server properties will be used to retrieve a license
    license.network.use.specified.server=true
    # Specifies license server URL, e.g., https://host[:port][/context-path]
    license.network.url=https\://[License Server Host]\:8443
    # Enables http authentication for the license server
    license.network.auth.enabled=false
    # Specifies username for license server authentication
    #license.network.user=[License Server Username]
    # Specifies password for license server authentication
    #license.network.password=[License Server Password]

    # === DTP SERVER ===
    # Specifies DTP server URL, e.g., https://host[:port][/context-path]
    #dtp.url=https\://[DTP Server Host]\:8443
    # Specifies username for DTP authentication
    #dtp.user=[DTP Server Username]
    # Specifies password for DTP authentication
    #dtp.password=[DTP Server Password]
    # Specifies the name of the DTP project that you want to link to
    #dtp.project=[DTP Project]

    # === MISC ===
    # Specifies scripting timeout in minutes
    #scripting.timeout.minutes=10
    # Enables logging telemetry data
    #usage.reporting.enabled=true

    # === OIDC ===
    # Enables or disables user authentication via OpenID Connect
    #oidc.enabled=false
    # Specifies the URI of the OpenID Connect server
    #oidc.issuer.uri=
    # Specifies the ID provided by your OpenID Connect server
    #oidc.client.id=
    # Specifies the method that will be used to authenticate the user on the OpenID Connect server
    #oidc.cli.mode=devicecode
    # Specifies the path to the token file containing user authentication information
    #oidc.devicecode.token.file=

    # === REPORTS ===
    # Specifies a tag that represents a unique identifier for each run
    # e.g., ${config_name}-${project_module}-${scontrol_branch}-${exec_env}
    #session.tag=${config_name}
    # Specifies a build identifier used to label results
    #build.id=${dtp_project}-yyyy-MM-dd
    # Specifies data that should be included in the report
    #report.developer_errors=true
    #report.developer_reports=true
    #report.authors_details=true
    #report.testcases_details=false
    #report.test_suites_only=true
    #report.failed_tests_only=false
    #report.output_details=false
    #report.env_details=false
    #report.organize_security_findings_by=CWE
    #report.associations=false
    #report.assoc.url.pr=
    #report.assoc.url.fr=
    #report.assoc.url.task=
    #report.assoc.url.req=
    #report.assoc.url.test=
    # Specifies report format configuration ['html', 'pdf', 'xml', 'custom']
    report.format=html
    #report.custom.extension=
    #report.custom.xsl.file=
    # Specifies installation directory for Jtest or dotTEST that generates coverage report
    #jtest.install.dir=
    #dottest.install.dir=
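Because a typo in this ConfigMap only surfaces after the pod starts, a quick offline sanity check of the properties payload can be worthwhile, for example to confirm the EULA flag before deploying. A minimal sketch, assuming the simple key=value format shown above (the helper name and sample string are ours):

```python
def parse_properties(text):
    """Parse Java-style key=value properties, ignoring blank and comment lines."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        # Split on the first '=' only; values may themselves contain '='
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

sample = """\
# === END USER LICENSE AGREEMENT ===
parasoft.eula.accepted=true
server.port.http=9080
"""
props = parse_properties(sample)
print(props["parasoft.eula.accepted"])  # true
print(props["server.port.http"])        # 9080
```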
Use the yaml file to create the configuration map for the soavirt server:
kubectl create -f soavirt-config.yaml
The following creates the soavirt server. If a custom Persistent Volume Claim name was used in previous steps, make sure to update the 'claimName' field to match the custom name.
Warning: When scaling beyond one replica, the events and statistics services should be disabled.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: soavirt
  labels:
    tag: soavirt
spec:
  replicas: 1
  selector:
    matchLabels:
      tag: soavirt
  serviceName: soavirt
  template:
    metadata:
      labels:
        tag: soavirt
    spec:
      volumes:
        - name: soavirt-pv
          persistentVolumeClaim:
            claimName: soavirt-pvc
        - name: soavirt-config
          configMap:
            name: soavirt-config
      containers:
        - name: soavirt
          image: parasoft/soavirt-server
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: soavirt-pv
              mountPath: /usr/local/parasoft/soavirt/webapps/ROOT/workspace
            - name: soavirt-config
              mountPath: /usr/local/parasoft/soavirt/webapps/config.properties
              subPath: config.properties
          ports:
            - name: http
              containerPort: 9080
            - name: events
              containerPort: 9617
            - name: statistics
              containerPort: 9618
          startupProbe:
            httpGet:
              path: /soavirt/api/v6/status?fields=machineId
              port: 9080
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 30
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /soavirt/api/v6/status?fields=machineId
              port: 9080
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 30
          readinessProbe:
            httpGet:
              path: /soavirt/api/v6/virtualAssets?fields=id
              port: 9080
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 30
          env:
            - name: CATALINA_OPTS
              value: "-Dparasoft.auto.deploy.new=false -Dparasoft.event.monitoring.broker.port=9617 -Dparasoft.server.statistics.broker.port=9618"
Use the yaml file to create the soavirt server:
kubectl create -f soavirt.yaml
Deploying CTP in Kubernetes
Using the Embedded HyperSQL Database
To deploy CTP in Kubernetes using the embedded HyperSQL database, follow the process outlined below.
Deploying multiple CTP servers in Kubernetes is not supported with this version. Support is limited to a single instance of CTP running in a Kubernetes cluster.
Prerequisites
First, a Persistent Volume and a Persistent Volume Claim for exports storage are needed. The volume should be provisioned with around 10GB of space (this can be increased or decreased according to your needs) and the ReadWriteOnce access mode is recommended. This space will be used for the workspace of the CTP server.
The default Persistent Volume Claim name is 'ctp-exports-pvc' and can be customized by updating the yaml definition of the CTP server. The example shown below is a configuration to set up an NFS Persistent Volume and Persistent Volume Claim. While the example uses NFS, this is not required; use whatever persistent volume type fits your needs.
Warning: For NFS, the exported directory must have the same UID and GID as the Parasoft user that runs the container. For example, execute the command chown 1000:1000 <shared_path>.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ctp-exports-storage
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: <path>
    server: <ip address>
---
# PersistentVolumeClaim for CTP exports folder
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ctp-exports-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs
  resources:
    requests:
      storage: 10Gi
Use the yaml file to create a Persistent Volume and a Persistent Volume claim:
kubectl create -f ctp-pv.yaml
Second, a Persistent Volume and a Persistent Volume Claim for a SQL database are needed. The volume should be provisioned with around 50GB of space (this can be increased or decreased according to your needs) and the ReadWriteOnce access mode is recommended.
The default Persistent Volume Claim name is 'ctp-hsqldb-pvc' and can be customized by updating the yaml definition of the CTP server. The example shown below is a configuration to set up an NFS Persistent Volume and Persistent Volume Claim. While the example uses NFS, this is not required; use whatever persistent volume type fits your needs.
Warning: For NFS, the exported directory must have the same UID and GID as the Parasoft user that runs the container. For example, execute the command chown 1000:1000 <shared_path>.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ctp-hsqldb-storage
spec:
  capacity:
    storage: 50Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: <path>
    server: <ip address>
---
# PersistentVolumeClaim for CTP HyperSQL DB
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ctp-hsqldb-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs
  resources:
    requests:
      storage: 50Gi
Use the yaml file to create a Persistent Volume and a Persistent Volume claim:
kubectl create -f ctp-hsqldb.yaml
CTP Deployment
Once the prerequisites have been met, you can deploy CTP in Kubernetes. If custom Persistent Volume Claim names were used in previous steps, make sure to update the appropriate 'volumeMounts:name' and 'claimName' fields to match the custom name.
The server EULA must be accepted by setting the ACCEPT_EULA value to "true" in the env section. Additionally, to opt in to sending anonymous usage data to Parasoft to help improve the product, change the USAGE_DATA value to "true" in the env section.
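One common pitfall with these env entries is YAML type coercion: an unquoted true or false is parsed as a boolean, and Kubernetes requires every env value to be a string, which is why the manifest quotes them. A small sketch of the invariant, using a hypothetical env list mirroring the entries in the manifest:

```python
# Env entries as they would appear after parsing the manifest; Kubernetes
# requires every 'value' to be a string, which is why the YAML quotes them.
env = [
    {"name": "ACCEPT_EULA", "value": "true"},   # quoted in YAML -> str
    {"name": "USAGE_DATA", "value": "false"},
    {"name": "LICENSE_SERVER_AUTH_ENABLED", "value": "false"},
]

# Collect any entries whose value is not a string (e.g. a bare YAML boolean)
bad = [e["name"] for e in env if not isinstance(e["value"], str)]
print(bad)  # [] -> every value is a string, as required
```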
apiVersion: v1
kind: Pod
metadata:
  name: ctp-pod
  labels:
    app: ctp
spec:
  containers:
    - name: ctp
      image: parasoft/ctp:latest
      ports:
        - containerPort: 8080
      volumeMounts:
        - name: ctp-exports-storage
          mountPath: /usr/local/parasoft/exports
        - name: ctp-hsqldb-storage
          mountPath: /usr/local/parasoft/ctp/hsqldb
      env:
        # === USE BELOW TO CONFIGURE ENVIRONMENT VARIABLES ===
        # Configures CTP to connect to license server at the specified base URL
        - name: LICENSE_SERVER_URL
          value: https://licenseserver:8443
        # Configures CTP to use basic authentication when connecting to license server
        - name: LICENSE_SERVER_AUTH_ENABLED
          value: "false"
        # Configures CTP to connect to license server as the specified user
        # - name: LICENSE_SERVER_USERNAME
        #   value: admin
        # Configures CTP to connect to license server with the specified password
        # - name: LICENSE_SERVER_PASSWORD
        #   value: admin
        # Set to true or false to opt-in or opt-out of sending anonymous usage data to Parasoft
        - name: USAGE_DATA
          value: "false"
        # Accepts the End User License Agreement if set to true
        - name: ACCEPT_EULA
          value: "false"
      # === PROBES ===
      startupProbe:
        httpGet:
          path: /em/resources/favicon.ico
          port: 8080
        initialDelaySeconds: 30
        periodSeconds: 30
        timeoutSeconds: 30
        failureThreshold: 3
      livenessProbe:
        httpGet:
          path: /em/resources/favicon.ico
          port: 8080
        initialDelaySeconds: 30
        periodSeconds: 30
        timeoutSeconds: 30
      readinessProbe:
        httpGet:
          path: /em/healthcheck
          port: 8080
        initialDelaySeconds: 30
        periodSeconds: 30
        timeoutSeconds: 30
  volumes:
    - name: ctp-exports-storage
      persistentVolumeClaim:
        claimName: ctp-exports-pvc
    - name: ctp-hsqldb-storage
      persistentVolumeClaim:
        claimName: ctp-hsqldb-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: ctp-service
spec:
  selector:
    app: ctp
  type: NodePort
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 30000
Use the yaml file to create the pod and the service that can be used to access CTP in Kubernetes:
kubectl create -f ctp-pod.yaml
Using an External Database
To deploy CTP in Kubernetes using one of the supported external databases, follow the process outlined below.
Deploying multiple CTP servers in Kubernetes is not supported with this version. Support is limited to a single instance of CTP running in a Kubernetes cluster.
Prerequisites
First, Persistent Volumes and Persistent Volume Claims for the database configuration and for exports storage are needed. They should be provisioned with around 1GB of space for the database configuration and around 10GB for exports storage (these can be increased or decreased according to your needs), and the ReadWriteOnce access mode is recommended. This space will be used for the workspace of the CTP server.
You must have a well-formatted db_config.xml present in the volume you are mounting. The db_config.xml below is an example of one that is well-formatted; you can copy it into the volume you are mounting if you prefer, since any configuration you need to do will be done within the application itself.
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <db_config>
        <connection>
            <url>jdbc:mysql://localhost:3306/em?useUnicode=true&amp;characterEncoding=UTF-8&amp;sessionVariables=sql_mode=NO_BACKSLASH_ESCAPES&amp;useSSL=false&amp;allowPublicKeyRetrieval=true</url>
            <username>em</username>
            <password>em</password>
        </connection>
    </db_config>
</configuration>
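Note that the & separators in the JDBC URL must be escaped as &amp; inside the XML file, or CTP will fail to parse it. A quick well-formedness check with Python's standard library catches this before the file reaches the pod (the sample URLs are illustrative):

```python
import xml.etree.ElementTree as ET

def is_well_formed(xml_text):
    """Return True if the text parses as XML, False on any parse error."""
    try:
        ET.fromstring(xml_text)
        return True
    except ET.ParseError:
        return False

# '&amp;' is the correct escape; a raw '&' makes the document malformed
good = "<url>jdbc:mysql://localhost:3306/em?useUnicode=true&amp;useSSL=false</url>"
bad = "<url>jdbc:mysql://localhost:3306/em?useUnicode=true&useSSL=false</url>"
print(is_well_formed(good))  # True
print(is_well_formed(bad))   # False
```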
The default Persistent Volume Claim names are 'ctp-config-pvc' and 'ctp-exports-pvc' and these names can be customized by updating the yaml definition of the CTP server. The example shown below is a configuration to set up NFS Persistent Volumes and Persistent Volume Claims. While the example uses NFS, this is not required; use whatever persistent volume type fits your needs.
Warning: For NFS, the exported directory must have the same UID and GID as the Parasoft user that runs the container. For example, execute the command chown 1000:1000 <shared_path>.
# ==== Persistent Volume to Mount db_config.xml ====
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ctp-config-storage
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: <path>
    server: <ip address>
---
# ==== PersistentVolumeClaim for db_config.xml ====
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ctp-config-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs
  resources:
    requests:
      storage: 1Gi
  volumeName: "ctp-config-storage"
---
# ==== Persistent Volume for Export Storage ====
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ctp-exports-storage
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: <path>
    server: <ip address>
---
# ==== PersistentVolumeClaim for CTP exports folder ====
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ctp-exports-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs
  resources:
    requests:
      storage: 10Gi
  volumeName: "ctp-exports-storage"
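Since ctp-pv.yaml carries four documents (two Persistent Volumes and two Claims), a quick count of its '---'-separated documents can catch a copy-paste slip before running kubectl. A minimal stdlib sketch; in practice you would more likely parse the file with a full YAML library:

```python
def yaml_doc_count(text):
    """Count documents in a multi-document YAML stream separated by '---' lines."""
    docs, current = 0, []
    for line in text.splitlines():
        if line.strip() == "---":
            if any(l.strip() for l in current):
                docs += 1
            current = []
        else:
            current.append(line)
    if any(l.strip() for l in current):
        docs += 1
    return docs

sample = "kind: PersistentVolume\n---\nkind: PersistentVolumeClaim\n"
print(yaml_doc_count(sample))  # 2
```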
Use the yaml file to create the Persistent Volumes and Persistent Volume Claims:
kubectl create -f ctp-pv.yaml
Second, a Persistent Volume and a Persistent Volume Claim for the external database are needed. The volume should be provisioned with around 50GB of space (this can be increased or decreased according to your needs) and the ReadWriteOnce access mode is recommended.
The default Persistent Volume Claim names in the example below can be customized by updating the yaml definition of the CTP server. Uncomment the sections for the database you are using. While the example uses NFS, this is not required; use whatever persistent volume type fits your needs. Be aware that the Persistent Volume and Persistent Volume Claim mounts are for the database JDBC adapters, not the databases themselves.
Different yaml examples are included for each of the supported external databases. Use the one that's right for your environment.
Warning: For NFS, the exported directory must have the same UID and GID as the Parasoft user that runs the container. For example, execute the command chown 1000:1000 <shared_path>.
MariaDB
# ==== Persistent Volume for MariaDB JDBC Adapter ====
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ctp-mariadbadapter-storage
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: <path>
    server: <ip address>
---
# ==== PersistentVolumeClaim for MariaDB JDBC Adapter ====
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ctp-mariadbadapter-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs
  resources:
    requests:
      storage: 1Gi
  volumeName: "ctp-mariadbadapter-storage"
MySQL
# ==== Persistent Volume for MySQL JDBC Adapter ====
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ctp-mysqladapter-storage
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: <path>
    server: <ip address>
---
# ==== PersistentVolumeClaim for MySQL JDBC Adapter ====
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ctp-mysqladapter-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs
  resources:
    requests:
      storage: 1Gi
  volumeName: "ctp-mysqladapter-storage"
Oracle
# ==== Persistent Volume for OracleDB JDBC Adapter ====
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ctp-oracleadapter-storage
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: <path>
    server: <ip address>
---
# ==== PersistentVolumeClaim for OracleDB JDBC Adapter ====
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ctp-oracleadapter-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs
  resources:
    requests:
      storage: 1Gi
  volumeName: "ctp-oracleadapter-storage"
Use the yaml file to create a Persistent Volume and a Persistent Volume claim:
kubectl create -f ctp-db.yaml
CTP Deployment
Once the prerequisites have been met, you can deploy CTP in Kubernetes. If custom Persistent Volume Claim names were used in previous steps, make sure to update the appropriate 'volumeMounts:name' and 'claimName' fields to match the custom name. Uncomment the sections for the database you are using.
The server EULA must be accepted by setting the ACCEPT_EULA value to "true" in the env section. Additionally, to opt in to sending anonymous usage data to Parasoft to help improve the product, change the USAGE_DATA value to "true" in the env section.
apiVersion: v1
kind: Pod
metadata:
  name: ctp-pod
  labels:
    app: ctp
spec:
  containers:
    - name: ctp
      image: parasoft/ctp:latest
      ports:
        - containerPort: 8080
      # Delete database.properties file to prevent overwriting of db_config.xml on pod startup
      command: [ "/bin/bash", "-c" ]
      args:
        - cd ctp/webapps/em/WEB-INF/classes/META-INF/spring/ && rm database.properties && cd ~ && ./entrypoint.sh
      volumeMounts:
        - name: ctp-config-storage
          mountPath: /usr/local/parasoft/ctp/webapps/em/config/db_config.xml
          subPath: db_config.xml
        - name: ctp-exports-storage
          mountPath: /usr/local/parasoft/exports
        # - name: ctp-hsqldb-storage
        #   mountPath: /usr/local/parasoft/ctp/hsqldb
        # === DB JDBC Adapter Volume Mounts ===
        # - name: ctp-mariadbadapter-storage
        #   mountPath: /usr/local/parasoft/ctp/webapps/em/WEB-INF/lib/mariadb-java-client-3.0.8.jar
        #   subPath: mariadb-java-client-3.0.8.jar
        # - name: ctp-mysqladapter-storage
        #   mountPath: /usr/local/parasoft/ctp/webapps/em/WEB-INF/lib/mysql-connector-java-8.0.30.jar
        #   subPath: mysql-connector-java-8.0.30.jar
        # - name: ctp-oracleadapter-storage
        #   mountPath: /usr/local/parasoft/ctp/webapps/em/WEB-INF/lib/ojdbc8.jar
        #   subPath: ojdbc8.jar
      env:
        # === USE BELOW TO CONFIGURE ENVIRONMENT VARIABLES ===
        # Configures CTP to connect to license server at the specified base URL
        - name: LICENSE_SERVER_URL
          value: https://licenseserver:8443
        # Configures CTP to use basic authentication when connecting to license server
        - name: LICENSE_SERVER_AUTH_ENABLED
          value: "false"
        # Configures CTP to connect to license server as the specified user
        # - name: LICENSE_SERVER_USERNAME
        #   value: admin
        # Configures CTP to connect to license server with the specified password
        # - name: LICENSE_SERVER_PASSWORD
        #   value: admin
        # Set to true or false to opt-in or opt-out of sending anonymous usage data to Parasoft
        - name: USAGE_DATA
          value: "false"
        # Accepts the End User License Agreement if set to true
        - name: ACCEPT_EULA
          value: "false"
      # === PROBES ===
      startupProbe:
        httpGet:
          path: /em/resources/favicon.ico
          port: 8080
        initialDelaySeconds: 30
        periodSeconds: 30
        timeoutSeconds: 30
        failureThreshold: 3
      livenessProbe:
        httpGet:
          path: /em/resources/favicon.ico
          port: 8080
        initialDelaySeconds: 30
        periodSeconds: 30
        timeoutSeconds: 30
      readinessProbe:
        httpGet:
          path: /em/healthcheck
          port: 8080
        initialDelaySeconds: 30
        periodSeconds: 30
        timeoutSeconds: 30
  volumes:
    - name: ctp-config-storage
      persistentVolumeClaim:
        claimName: ctp-config-pvc
    - name: ctp-exports-storage
      persistentVolumeClaim:
        claimName: ctp-exports-pvc
    # - name: ctp-hsqldb-storage
    #   persistentVolumeClaim:
    #     claimName: ctp-hsqldb-pvc
    # === SQL JDBC Adapter Volumes ===
    # - name: ctp-mariadbadapter-storage
    #   persistentVolumeClaim:
    #     claimName: ctp-mariadbadapter-pvc
    # - name: ctp-mysqladapter-storage
    #   persistentVolumeClaim:
    #     claimName: ctp-mysqladapter-pvc
    # - name: ctp-oracleadapter-storage
    #   persistentVolumeClaim:
    #     claimName: ctp-oracleadapter-pvc
---
# ==== CTP Service Definition ====
apiVersion: v1
kind: Service
metadata:
  name: ctp-service
spec:
  selector:
    app: ctp
  type: NodePort
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 30000
Use the yaml file to create the pod and the service that can be used to access CTP in Kubernetes:
kubectl create -f ctp-pod.yaml