...
Code Block |
---|
kubectl create namespace parasoft-lss-namespace |
Note: The namespace name "parasoft-lss-namespace" is used throughout this documentation in command and resource examples. If you use a different name for your namespace, be sure to change any instances of "parasoft-lss-namespace" in those examples to your namespace name.
...
Create the License Server Environment
To create the License Server environment, you will first need a yaml file that defines a Secret (optional), a volume, a Pod or StatefulSet, and a Service (optional). The Secret is used to pull the License Server image from the repository. The Pod or StatefulSet runs a License Server container configured with a volume to persist data and a liveness probe for container health. The Service makes License Server accessible to external clients by allocating ports on the node and mapping them to ports in the pod. Example yaml files for a Pod or StatefulSet (both called "parasoft-lss.yaml") are shown below. These examples use an NFS volume, but that is not required; use the volume type that best fits your needs.
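If your image registry requires authentication, the Secret referenced by 'imagePullSecrets' (YOUR_SECRET in the examples below) can be created with kubectl before deploying. This is a sketch; the registry host and credentials are placeholders, not values from this documentation:

```shell
# Create a docker-registry Secret used to pull the License Server image.
# REGISTRY_HOST, REGISTRY_USER, and REGISTRY_PASSWORD are placeholders;
# substitute the values for your own registry.
kubectl create secret docker-registry YOUR_SECRET \
  --docker-server=REGISTRY_HOST \
  --docker-username=REGISTRY_USER \
  --docker-password=REGISTRY_PASSWORD \
  -n parasoft-lss-namespace
```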
Warning |
---|
Once License Server has been deployed using Pod or StatefulSet, switching the Kind will invalidate machine-locked licenses. |
Example yaml using 'kind: Pod'
Code Block |
---|
apiVersion: v1
kind: Pod
metadata:
  name: lss
  namespace: parasoft-lss-namespace
  labels:
    app: LSS
spec:
  volumes:
    - name: lss-data
      nfs:
        server: NFS_SERVER_HOST
        path: /lss/
    # Uncomment section below if you are setting up a custom keystore; you will also need to uncomment the associated volumeMounts below
    # - name: keystore-cfgmap-volume
    #   configMap:
    #     name: keystore-cfgmap
  securityContext:
    runAsNonRoot: true
  containers:
    - name: lss-server
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
        seccompProfile:
          type: RuntimeDefault
      image: LSS_DOCKER_IMAGE
      imagePullPolicy: Always
      env:
        - name: PARASOFT_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: PARASOFT_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        # To inject JVM arguments into the container, specify the "env" property as in the example below, which injects LSS_JAVA_OPTS
        # - name: LSS_JAVA_OPTS
        #   value: "-Dparasoft.use.license.v2=true"
      ports:
        - containerPort: 8080
          name: "http-server"
        - containerPort: 8443
          name: "https-server"
      volumeMounts:
        - mountPath: "/usr/local/parasoft/license-server/data"
          name: lss-data
        # Uncomment section below if you are setting up a custom keystore. Note that updates made to these files will not be reflected inside the container once it's been deployed; you will need to restart the container for it to contain any updates.
        # - name: keystore-cfgmap-volume
        #   mountPath: "/usr/local/parasoft/license-server/app/tomcat/conf/.keystore"
        #   subPath: keystore
        # - name: keystore-cfgmap-volume
        #   mountPath: "/usr/local/parasoft/license-server/app/tomcat/conf/server.xml"
        #   subPath: server-config
      # To prevent liveness probe failures on environments with low or overly taxed RAM/CPU, we recommend increasing the timeout seconds
      livenessProbe:
        exec:
          command:
            - healthcheck.sh
        initialDelaySeconds: 120
        periodSeconds: 60
        timeoutSeconds: 30
        failureThreshold: 5
  restartPolicy: Always
  serviceAccountName: parasoft-account
  imagePullSecrets:
    - name: YOUR_SECRET
---
apiVersion: v1
kind: Service
metadata:
  name: lss
  namespace: parasoft-lss-namespace
spec:
  type: NodePort
  selector:
    app: LSS
  ports:
    - port: 8080
      name: PORT_NAME_1
      nodePort: XXXXX
    - port: 8443
      name: PORT_NAME_2
      nodePort: XXXXX
# SERVICE CONFIG NOTES:
# 'name' can be whatever you want
# 'nodePort' must be between 30000-32767
# 'spec.selector' must match 'metadata.labels' in pod config |
Example yaml using 'kind: StatefulSet'
Code Block |
---|
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: lss
namespace: parasoft-lss-namespace
labels:
app: LSS
spec:
selector:
matchLabels:
app: LSS
serviceName: lss-service
replicas: 1
template:
metadata:
labels:
app: LSS
spec:
volumes:
- name: lss-data
nfs:
server: NFS_SERVER_HOST
path: /lss/
# persistentVolumeClaim:
# claimName: lss-pvc
# Uncomment section below if you are setting up a custom keystore; you will also need to uncomment out the associated volumeMounts below
# - name: keystore-cfgmap-volume
# configMap:
# name: keystore-cfgmap
securityContext:
runAsNonRoot: true
containers:
- name: lss-server
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop: [ "ALL" ]
seccompProfile:
type: RuntimeDefault
image: LSS_DOCKER_IMAGE
imagePullPolicy: Always
env:
- name: PARASOFT_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: PARASOFT_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
# To inject JVM arguments into the container, specify the "env" property as in the example below, which injects LSS_JAVA_OPTS
# - name: LSS_JAVA_OPTS
# value: "-Dparasoft.use.license.v2=true"
ports:
- containerPort: 8080
name: "http-server"
- containerPort: 8443
name: "https-server"
volumeMounts:
- name: lss-data
mountPath: "/usr/local/parasoft/license-server/data"
# Uncomment section below if you are setting up a custom keystore. Note that updates made to these files will not be reflected inside the container once it's been deployed; you will need to restart the container for it to contain any updates.
# - name: keystore-cfgmap-volume
# mountPath: "/usr/local/parasoft/license-server/app/tomcat/conf/.keystore"
# subPath: keystore
# - name: keystore-cfgmap-volume
# mountPath: "/usr/local/parasoft/license-server/app/tomcat/conf/server.xml"
# subPath: server-config
# To prevent liveness probe failures on environments with low or overly taxed RAM/CPU, we recommend increasing the timeout seconds
livenessProbe:
exec:
command:
- healthcheck.sh
initialDelaySeconds: 120
periodSeconds: 60
timeoutSeconds: 30
failureThreshold: 5
restartPolicy: Always
serviceAccountName: parasoft-account
imagePullSecrets:
- name: YOUR_SECRET
---
apiVersion: v1
kind: Service
metadata:
name: lss
namespace: parasoft-lss-namespace
spec:
type: NodePort
selector:
app: LSS
ports:
- port: 8080
name: PORT_NAME_1
nodePort: XXXXX
- port: 8443
name: PORT_NAME_2
nodePort: XXXXX
# SERVICE CONFIG NOTES:
# 'name' can be whatever you want
# 'nodePort' must be between 30000-32767
# 'spec.selector' must match 'metadata.labels' in pod config |
Use the yaml file to create the LSS environment:
Code Block |
---|
kubectl create -f parasoft-lss.yaml |
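After running the create command, you can optionally confirm that the pod reached the Running state and that the service was assigned its node ports before trying to access the UI. This is a sanity check, not a required step:

```shell
# Confirm the License Server pod is Running and inspect the service's port mappings
kubectl get pods -n parasoft-lss-namespace
kubectl get service lss -n parasoft-lss-namespace
```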
To access the UI on a web browser, use the node ports allocated in the service definition as the address (for example, NODE_HOST:NODE_PORT).
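If you did not record the node ports you allocated, they can be read back from the service. The jsonpath filter below assumes the Service definition shown earlier, where the HTTPS container port is 8443:

```shell
# Print the node port mapped to container port 8443 (HTTPS).
# The filter selects the entry in spec.ports whose 'port' field equals 8443.
kubectl get service lss -n parasoft-lss-namespace \
  -o jsonpath='{.spec.ports[?(@.port==8443)].nodePort}'
```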
...
- Copy log4j.xml from the <INSTALL_DIR>/app/ directory to <INSTALL_DIR>/data/.
- Open the log4j.xml file in <INSTALL_DIR>/data/ and add the following logger in the Loggers element:
Code Block language text <Logger name="com.parasoft.xtest" level="ALL"> <AppenderRef ref="CONSOLE" /> </Logger>
- Find the commented-out section for LSS_JAVA_OPTS in the yaml file, uncomment it, and add the following as the value for LSS_JAVA_OPTS:
Code Block language yml -Dparasoft.cloudvm.verbose=true -Dparasoft.logging.config.file=/usr/local/parasoft/license-server/data/log4j.xml
- Restart the application.
Additional logging will go to the catalina log file (stdout). You can run this command to save the log to the local file system (replace "lss-pod1-nfs" with your pod name and "parasoft-lss-namespace" with the namespace you used):
Code Block language text kubectl logs lss-pod1-nfs -n parasoft-lss-namespace > lss-debug.log
Deploying License Server in Kubernetes with a Helm Chart
...
Parasoft has published an official Helm chart to Docker Hub for your convenience. Full installation instructions are included in the readme. See https://hub.docker.com/r/parasoft/lss-helm.
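As a rough sketch, a chart install typically looks like the command below. The chart reference here is an assumption based on the Docker Hub location; consult the readme linked above for the exact repository, chart name, and configurable values:

```shell
# Hypothetical install command; confirm the chart reference in the official readme
helm install lss oci://registry-1.docker.io/parasoft/lss-helm -n parasoft-lss-namespace
```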
Troubleshooting
machineID is LINUX2-0
This issue can occur when there is an underlying permission issue. To resolve it, try the following options:
- Search the tests.log file found in the <DATA_DIR>/logs/ directory for the error: "Kubernetes API call fails with status=403 error".
- Verify that you have created the permissions required by License Server using parasoft-permissions.yaml. Note: If you are upgrading, make sure to use the parasoft-permissions.yaml for the version to which you are upgrading.
- Confirm that all Parasoft-required resources are using the same namespace.
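One way to investigate the 403 permission error described above is to list what the License Server's service account is allowed to do in the namespace. The service account name below matches 'serviceAccountName' in the earlier yaml examples:

```shell
# List the permissions granted to the License Server service account;
# Kubernetes API access used for machine identification should appear here.
kubectl auth can-i --list \
  --as=system:serviceaccount:parasoft-lss-namespace:parasoft-account \
  -n parasoft-lss-namespace
```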