Introduction
Kubernetes scalability models (horizontal and vertical) are optimized for scaling applications that process streams of independent requests and that tolerate dynamic Pod restarts in response to changing scalability requirements. In addition, the horizontal model allows multiple Pods to run behind one port exposed by a Service to the outside world. These models do not fit the Load Test scalability mechanism, in which a Load Test controller establishes a permanent connection to a remote machine on a specific host and port at the beginning of a load test. This connection has to stay open for the entire duration of the load test. If a Pod running a Load Test remote machine is restarted in response to changed scalability requirements, the connection to the controller will be lost and that machine/Pod will no longer participate in the current load test. In addition, if multiple Pods are running behind a Service, a permanent connection will be established to only one of them, leaving the other Pods idle for the duration of the load test. For these reasons, we do not recommend using Kubernetes horizontal and vertical autoscaling with Load Test. You can, however, use the computing resources of a Kubernetes cluster to scale up load tests as outlined in this document.
Deploying Load Test Servers to Kubernetes
To deploy Load Test servers to Kubernetes, first create a YAML file with the following content (for this example, we are calling it loadtest-pod-1.yaml):
apiVersion: v1
kind: Pod
metadata:
  name: loadtest-1
  labels:
    app: loadtest-1
spec:
  containers:
  - name: loadtest-1
    image: parasoft/soavirt
    imagePullPolicy: IfNotPresent
    ports:
    - name: loadtest-port-1
      containerPort: 8189
    command: [ "/usr/local/parasoft/soavirt/loadtest" ]
    args: [ "-loadtestserver" ]
    env:
    - name: ACCEPT_EULA
      value: "true"
    # If you expect other Pods to be running on the same Node, reserve
    # permanent resources for the Load Test server. Set the CPU and memory
    # amounts according to the expected load level. You do not have to do
    # this if the Load Test server is the only Pod on a Node; in that case,
    # all of the Node's resources will be available to Load Test.
    # Because the requests and limits below are equal, Kubernetes assigns
    # this Pod the Guaranteed QoS class, which ensures availability of the
    # Node's resources when other Pods run on the same Node.
    resources:
      requests:
        memory: "4Gi"
        cpu: "4"
      limits:
        memory: "4Gi"
        cpu: "4"
  # Map each Load Test server to a specific Node, so that when you run a
  # load test you can monitor each Node's resource utilization and adjust
  # the remote machine's load accordingly. Set the nodeName value to the
  # actual name of the Node you selected for this Load Test server instance:
  nodeName: YOUR_NODE_NAME
---
kind: Service
apiVersion: v1
metadata:
  name: loadtest-1
spec:
  selector:
    app: loadtest-1
  type: NodePort
  ports:
  - name: loadtest-port-1
    protocol: TCP
    port: 8189
    targetPort: 8189
    nodePort: 30089
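To find the actual Node names available for the nodeName field, you can list the Nodes in your cluster:

kubectl get nodes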
Use this YAML file to create a Service and a Pod for a single Load Test server by executing the following command (using the name of your YAML file in place of "loadtest-pod-1.yaml"):
kubectl apply -f loadtest-pod-1.yaml
You should see the following output:
pod/loadtest-1 created
service/loadtest-1 created
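To confirm that the Pod was scheduled on the Node you selected and that the Service is in place, you can check with kubectl:

kubectl get pod loadtest-1 -o wide
kubectl get service loadtest-1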
To deploy more Load Test server instances:
- Create a copy of the YAML file above and rename it (for example, to loadtest-pod-2.yaml).
- Inside the file, rename loadtest-1 to loadtest-2 and loadtest-port-1 to loadtest-port-2.
- Set the nodeName value to the name of a different Node, so that each Load Test server runs on its own Node.
- Set the Service nodePort to a different value (for example, 30099).
- Do not change the 8189 port values.
Use the YAML file to create a second Load Test Service/Pod instance (using the name of your YAML file in place of "loadtest-pod-2.yaml"):
kubectl apply -f loadtest-pod-2.yaml
Repeat the process if you need more Load Test Pods.
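If you prefer to script the copies, the following is a minimal sketch for a Unix shell (it assumes GNU sed; the placeholder YOUR_SECOND_NODE stands for the Node you selected for the new instance):

cp loadtest-pod-1.yaml loadtest-pod-2.yaml
sed -i 's/loadtest-port-1/loadtest-port-2/g; s/loadtest-1/loadtest-2/g; s/nodePort: 30089/nodePort: 30099/; s/nodeName: .*/nodeName: YOUR_SECOND_NODE/' loadtest-pod-2.yaml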
Setting Up Load Test Remote Machines with Your Load Test Configuration
Add the Kubernetes Load Test remote machines to your Load Test Configuration (see Running Load Tests on Remote Machines for more information). You can right-click a remote machine and choose Verify to check its availability.
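As an additional network-level check, you can verify from the controller machine that a server's NodePort is reachable (a sketch using netcat; substitute the external IP of the Node hosting the Pod):

nc -vz YOUR_NODE_IP 30089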
If your application under test (AUT) is not exposed outside the Kubernetes cluster, set the load level for the controller machine (localhost) to zero as shown in the screenshot below. The Load Test controller runs outside the cluster and cannot see the internal IP address of your AUT. The load generators, on the other hand, run inside the cluster and have access to the AUT's internal IP address.
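To check whether your AUT is exposed outside the cluster, you can inspect its Service type; a ClusterIP Service, for example, is reachable only from inside the cluster. The Service name my-aut below is a placeholder for your AUT's Service:

kubectl get service my-aut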
Running the Load Test
Use a Kubernetes monitoring tool of your choice (for example, Prometheus/Grafana) to observe the CPU and memory utilization of the Nodes to which you assigned the Load Test server Pods. CPU and memory utilization should not exceed 75-80% on average to ensure accuracy of Load Test scenario execution and test execution time measurements.
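If the metrics-server add-on is installed in your cluster, you can also spot-check Node utilization from the command line:

kubectl top nodes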