Overview
Generally speaking, there are three things to consider when deploying a coverage agent in a Kubernetes environment:
- Inserting the coverage agent into the container for the application under test (AUT).
- Invoking the coverage agent during container startup.
- Accessing the coverage agent from either inside or outside the Kubernetes cluster.
This page will discuss these considerations in broad terms while giving examples using the Spring Petclinic project as an AUT. You are encouraged to download the project from GitHub (https://github.com/parasoft/spring-petclinic-microservices) and follow along.
The Java agent is used for the examples in this documentation, but the process should be similar for .NET use cases.
Installing Coverage Agents
You need to insert the required components for gathering runtime coverage into the container for your AUT. For Java, this consists of the agent.jar along with its properties, which can either be injected directly into the command that launches the agent or supplied in a properties file. If you are using custom images, this can be done when the image is built. If not, you can use volumes to mount the required files into the container. There are multiple ways to do this, including using an image that already contains the agent.jar file as an initContainer combined with a shared emptyDir volume to give the primary container access to the jar, or directly mounting a volume that already contains the agent.jar and a properties file for it. Both of these methods are demonstrated in the examples below.
For detailed information about Kubernetes volumes, see the Kubernetes documentation: https://kubernetes.io/docs/concepts/storage/volumes/.
initContainer Method
This method uses an image that already contains the agent.jar file as an initContainer combined with a shared emptyDir volume to give the primary container access to the jar. An example YAML file for the API Gateway in the Spring Petclinic project is shown below.
spec:
  template:
    spec:
      containers:
      - name: api-gateway
        image: <HOST>/ctp/springcommunity/spring-petclinic-api-gateway:latest
        volumeMounts:
        - name: api-gateway-volume
          mountPath: /tmp/coverage
      initContainers:
      - name: java-coverage-agent
        image: <HOST>/ctp/java-coverage-init-container:latest
        command: ['sh', '-c', 'cp /agent/agent.jar /tmp/coverage/agent.jar']
        volumeMounts:
        - name: api-gateway-volume
          mountPath: /tmp/coverage
      volumes:
      - name: api-gateway-volume
        emptyDir: {}
Note the command to copy the coverage agent from inside the initContainer image to the shared emptyDir volume.
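The contents of the java-coverage-init-container image are not documented here, but a minimal sketch of how such an image could be built is shown below. This is an assumption for illustration only: the busybox base image is chosen because the copy command above needs nothing more than sh and cp, and agent.jar is assumed to sit next to the Dockerfile.

# Hypothetical Dockerfile for the init container image.
# It only needs to carry agent.jar so it can be copied into the shared emptyDir volume.
FROM busybox:stable
COPY agent.jar /agent/agent.jar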
Mounting a Volume Directly Method
This method mounts a volume that already contains the agent.jar and the agent.properties file directly.
There are several different types of volumes you can choose from to do this. The example below uses an NFS volume because it is not bound to any one cloud provider and does not require the ability to access a cluster's host storage, as hostPath volumes do, but you can use any volume type that fits your needs.
If you have multiple services with coverage agents, corresponding to different containers, each in their own pod and deployment, you need to create a corresponding PersistentVolume and PersistentVolumeClaim for each. An example YAML file for the API Gateway for the Spring Petclinic (which has four services in different containers) is shown below.
# ==== Persistent Volume to Mount Agent Jar and Properties File into API Gateway ====
apiVersion: v1
kind: PersistentVolume
metadata:
  name: api-gateway-volume
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: <PATH ON NFS MOUNT>
    server: <ADDRESS OF NFS SERVER>
---
# ==== PersistentVolumeClaim for API Gateway Mount ====
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: api-gateway-pvc
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: nfs
  resources:
    requests:
      storage: 1Gi
  volumeName: "api-gateway-volume"
Make sure the agent.jar and agent.properties files are inside the <PATH ON NFS MOUNT> on the NFS server. Since each agent.properties file will contain specifications for a separate DTP project as part of the microservices coverage workflow, a separate directory specified by <PATH ON NFS MOUNT> should be used for each volume. The volume can then be associated with its container as in the example below.
spec:
  template:
    spec:
      containers:
      - volumeMounts:
        - name: api-gateway-volume
          mountPath: <MOUNT PATH INSIDE CONTAINER>
      volumes:
      - name: api-gateway-volume
        persistentVolumeClaim:
          claimName: api-gateway-pvc
When the container is created, agent.jar and agent.properties can be found at <MOUNT PATH INSIDE CONTAINER>.
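To confirm that the files were mounted as expected, you can list the mount path from inside the running container. The sketch below is an assumption based on the earlier examples; adjust the deployment and container names to match your cluster.

# List the mounted agent files inside the API Gateway container (names assumed).
kubectl exec deploy/api-gateway -c api-gateway -- ls <MOUNT PATH INSIDE CONTAINER>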
Invoking Coverage Agents
How you invoke the coverage agent depends on how you installed it. For example, if you created a custom image that contains the coverage agent, you can start the agent by setting the ENTRYPOINT or CMD fields in the Dockerfile or by using one of the environment variables discussed below (a sketch is shown below). If you used volumes to mount the files into the container, the approach varies according to which method you used.
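The custom image case can be illustrated with a short sketch. The base image, paths, and application jar name below are assumptions for illustration, not part of the Spring Petclinic images; the volume-based methods are discussed next.

# Hypothetical Dockerfile for a custom image with the coverage agent baked in.
FROM eclipse-temurin:17-jre
COPY app.jar /app.jar
COPY agent.jar /opt/coverage/agent.jar
COPY agent.properties /opt/coverage/agent.properties
# Attach the coverage agent when the application starts.
ENTRYPOINT ["java", "-javaagent:/opt/coverage/agent.jar=settings=/opt/coverage/agent.properties", "-jar", "/app.jar"]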
With the initContainer method, you need to introduce the agent properties as well as invoke the coverage agent. For the former, you can use a ConfigMap with another shared volume to insert an agent.properties file, which can then be referenced, or you can set the desired properties directly as part of the javaagent argument that invokes the agent. The example below uses the JDK_JAVA_OPTIONS environment variable (for more details, see this page), though you can also use JAVA_TOOL_OPTIONS (for more details, see this page) or directly modify the invoking Java call, as discussed for the NFS volume-based approach.
spec:
  template:
    spec:
      containers:
      - name: api-gateway
        image: <HOST>/ctp/springcommunity/spring-petclinic-api-gateway:latest
        command: ["./dockerize"]
        args: ["-wait=tcp://discovery-server:8761", "-timeout=60s", "--", "java", "org.springframework.boot.loader.JarLauncher"]
        env:
        - name: JDK_JAVA_OPTIONS
          value: "-javaagent:/tmp/coverage/agent.jar=settings=/config/agent.properties"
        volumeMounts:
        - name: api-gateway-configmap
          mountPath: /config
      volumes:
      - name: api-gateway-configmap
        configMap:
          name: api-gateway-configmap
          items:
          - key: "agent.properties"
            path: "agent.properties"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-gateway-configmap
data:
  agent.properties: |
    jtest.agent.runtimeData=runtime_coverage
    jtest.agent.includes=org/springframework/samples/**
    jtest.agent.autostart=false
    jtest.agent.port=8050
With the NFS volume-based approach, once the container has been configured to mount in the coverage agent, the combined command and args fields of the container specification can be used to override the container's entry point and invoke the coverage agent. The exact mechanism for doing so is specific to the container, but for Java it generally entails adding the coverage agent arguments to however the underlying jar or war file for the application under test is invoked.
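As a generic illustration, for a hypothetical image that normally starts with java -jar /app.jar, the override might look like the sketch below; the container name, image, and paths are assumptions, not part of the Spring Petclinic project.

spec:
  template:
    spec:
      containers:
      - name: my-app                      # hypothetical container
        image: <HOST>/my-app:latest
        # Override the image's entry point and attach the coverage agent from the mounted volume.
        command: ["java"]
        args: ["-javaagent:<MOUNT PATH INSIDE CONTAINER>/agent.jar=settings=<MOUNT PATH INSIDE CONTAINER>/agent.properties", "-jar", "/app.jar"]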
For the Spring Petclinic example, the images use Dockerize to coordinate their startup order. As such, it's necessary to add the javaagent arguments as environment variables and pass them in, as demonstrated below.
spec:
  template:
    spec:
      containers:
      - command: ["./dockerize"]
        args: ["-wait=tcp://discovery-server:8761", "-timeout=60s", "--", "java", "$(COV_AGENT_ARGS)", "org.springframework.boot.loader.JarLauncher"]
        ports:
        - containerPort: <COVERAGE AGENT PORT>
        env:
        - name: COV_AGENT_ARGS
          value: "-javaagent:<PATH TO AGENT.JAR INSIDE CONTAINER>=settings=<PATH TO AGENT.PROPERTIES INSIDE CONTAINER>,runtimeData=<PATH TO RUNTIME DATA DIRECTORY IN CONTAINER>"
Also, note that in addition to whatever containerPort must be specified for the application in the image itself, you need to expose the port for the coverage agent as well. Port 8050 is the default, but this can be changed as needed using the agent.properties file.
Accessing Coverage Agents
When accessing coverage agents, there are two main scenarios to consider: accessing them from within the Kubernetes cluster and accessing them from outside it. Both cases are covered below.
Access within the Kubernetes Cluster
Unless containers share a pod (in which case they may communicate using localhost), communication in Kubernetes should go through services. While container A on pod A can access container B on pod B via the pod IP within the cluster, this is bad practice because pod B might be restarted or replaced with another pod running container B under a new IP.
The advantage of a service is that container A does not need to know which pod container B is running on; it can reference the service, which acts as a stable proxy for container B. This also lets container A refer to the functionality it needs by the name of the service rather than a specific pod IP, much like how Docker containers on a shared network can reference each other by name.
The default type of service is ClusterIP, which serves this purpose of communication between components within a Kubernetes cluster. Create one service per coverage agent to provide this connectivity. An example service definition for the API Gateway's coverage agent is shown below.
apiVersion: v1
kind: Service
metadata:
  name: api-gateway-coverage-agent
spec:
  ports:
  - port: <SERVICE ACCESS PORT>
    protocol: TCP
    targetPort: <COVERAGE AGENT CONTAINERPORT>
  selector:
    app: petclinic
    microservice: api-gateway
Detailed documentation about services can be found at https://kubernetes.io/docs/concepts/services-networking/service/.
If all you need is to access the coverage agents from within the cluster, this is sufficient. For example, if you have CTP set up to coordinate coverage inside the cluster as well, you should be able to reach the coverage agent for the API Gateway from CTP by simply setting the coverage agent's URL in CTP to http|https://<SERVICE NAME>:<SERVICE ACCESS PORT> (in the example above, this would be http://api-gateway-coverage-agent:<SERVICE ACCESS PORT>).
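If you want to verify connectivity from inside the cluster without CTP, one option is a temporary pod that calls the agent through the service. The sketch below is an assumption: it uses the public curlimages/curl image and assumes the agent answers on a status endpoint of its REST API; adjust the URL and path to match your service, port, and agent documentation.

# Run a throwaway pod and query the coverage agent through its service (endpoint path assumed).
kubectl run agent-check --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://api-gateway-coverage-agent:<SERVICE ACCESS PORT>/status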
Access from Outside the Kubernetes Cluster
Things get more complicated if you need to access resources, such as coverage agents, from outside the cluster. ClusterIP services are normally only accessible from within the cluster, so you'll need to consider other ways of reaching the coverage agents. These include:
- Port Forwarding: It is possible to use the port-forward command with kubectl to establish a connection between the host of the Kubernetes cluster and a given port on a pod. This solution scales very poorly, and because it directly links host and pod, it also suffers from the issue of tightly coupling access to a given resource with the specific pod it is running on at the moment the connection is made. As such, this approach is unsuitable for much more than development and testing (a minimal sketch is shown after this list).
- Proxy Access to existing ClusterIP Services: While you can access a ClusterIP service using kubectl proxy (see this page for more information), this involves connecting with the Kubernetes API server and has several other restrictions and caveats that also make it unsuitable for uses other than development and testing by a cluster administrator.
- NodePort Services: NodePort services have an additional third 'nodePort' (usually automatically assigned) in the range specified by --service-node-port-range (30000-32767 by default). You can access the NodePort service from outside the cluster using <NODE IP ADDRESS>:<nodePort VALUE>.
- LoadBalancer Services: LoadBalancer services rely on components outside the Kubernetes cluster, typically paid features provided by a cloud provider like AWS or GKE, to route external traffic into the cluster.
- Ingress: Ingress acts as a sort of entrypoint to the cluster, permitting path-based routing to multiple services among other features. It requires an Ingress controller to be installed into the cluster. See this page for more information.
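For reference, the port forwarding mentioned above amounts to something like the sketch below, assuming the coverage agent is listening on its default port 8050; the pod name is a placeholder.

# Forward local port 8050 to the coverage agent port on a specific pod (development/testing only).
kubectl port-forward pod/<API GATEWAY POD NAME> 8050:8050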
As you can see, there are a variety of options to choose from. The type of Kubernetes cluster is probably the most important factor. If you intend to deploy onto a cluster hosted on AWS, LoadBalancers are probably a good choice. If your cluster is bare metal, NodePorts become more attractive.
The Spring Petclinic example uses Ingress. Ingress has two nice advantages to consider here:
- Path-based routing makes it easy to indicate the connection between a given service's endpoint and access to the coverage agent running with that service. For example, using foo.bar as the hostname for the endpoint to access the application foobar, you can then use the subpath foo.bar/agent as the endpoint for the agent running in the foobar container.
- Ingress makes Helm charts or YAML files more portable. So long as a cluster has a working Ingress controller, you should be able to deploy with little to no modification or customization.
Ingress settings for the Spring Petclinic API Gateway look like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: petclinic-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - host: api-gateway.io
    http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: api-gateway
            port:
              number: 8080
      - path: /covagent/(.*)
        pathType: Prefix
        backend:
          service:
            name: api-gateway-coverage-agent
            port:
              number: 8051
For the Ingress to work, you need to make sure the cluster has a functioning Ingress controller present. If using Minikube or MicroK8s, this is relatively straightforward; just enable their respective Ingress addons and it should work out of the box. For a kubeadm cluster, some configuration is required:
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.hostNetwork=true
This command uses the existing ingress-nginx chart to set up an Ingress controller.
Once you verify that the Ingress controller is running (look at the pods in the ingress-nginx namespace), you can install a Helm chart.
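For example, one quick way to check is to list the pods in that namespace and confirm the controller pod reports a Running status:

# Check that the ingress-nginx controller pod is up.
kubectl get pods --namespace ingress-nginx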
Lastly, to access the endpoints laid out via Ingress, you need to add the hostnames to the hosts file of the computer you will use to browse to them. On a Linux machine, this is /etc/hosts; on a Windows machine, this is C:\Windows\System32\drivers\etc\hosts.
For example, to access the API Gateway, which has hostname api-gateway.demo, you would add the line below, where <IP ADDRESS> is the IP for the Ingress controller pod.
<IP ADDRESS> api-gateway.demo
You can also find the relevant IP by using kubectl get ingress and finding the IP listed for the Ingress resource. Note that one or the other may be empty/blank depending on the combination of Ingress controller and Kubernetes cluster.
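For example, assuming the Ingress resource defined earlier, the command would look like the sketch below; if an IP has been assigned, it appears in the ADDRESS column of the output.

# Show the Ingress resource; the ADDRESS column holds the IP (it may be blank on some clusters).
kubectl get ingress petclinic-ingress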