This topic explains how to configure message proxies that send and receive messages over the JMS transport.
See JMS Prerequisites.
When traffic is exchanged over queues, we assume that a client application sends a request to a destination queue and that a server application receives the message from that queue. The server then sends a reply message to a second queue for the client to receive. In this scenario, the message proxy serves as a "man-in-the-middle" between the client and the server, which requires two additional queues on the messaging provider to facilitate the mediation. The Parasoft proxy receives messages from the queue where the client places them, records the message contents (if recording is enabled), and then puts each message on the queue where the server will receive it. Similarly, the server sends its reply to a queue where the proxy picks it up, records it (if recording is enabled), and places it on the queue where the client expects the reply.
As a result, it is necessary to allocate two additional queues and point either the client or the server at them; only one of the two applications needs its queue configuration modified.
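To make this flow concrete, here is a minimal client-side sketch of the request/reply pattern the proxy mediates, written against the standard javax.jms API. The queue names are hypothetical placeholders, and the ConnectionFactory is assumed to be obtained from your provider's JNDI context (see the connection settings sketch further below).

```java
import javax.jms.*;

public class ProxiedClientSketch {
    // The factory is assumed to be looked up from the provider's JNDI context.
    static void sendAndReceive(ConnectionFactory factory) throws JMSException {
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

            // With the proxy in place, the client targets the two additional
            // (proxy) queues instead of the server's queues. Names are hypothetical.
            Queue requestQueue = session.createQueue("PROXY.REQUEST");
            Queue replyQueue   = session.createQueue("PROXY.REPLY");

            // Send the request; the proxy records it (if recording is enabled)
            // and forwards it to the queue the server listens on.
            TextMessage request = session.createTextMessage("<order id=\"123\"/>");
            request.setJMSReplyTo(replyQueue);
            session.createProducer(requestQueue).send(request);

            // Receive the reply that the proxy places back on the client's reply
            // queue. This assumes the responder copies the request's message ID
            // into the reply's JMSCorrelationID, which is the conventional pattern.
            String selector = "JMSCorrelationID = '" + request.getJMSMessageID() + "'";
            MessageConsumer consumer = session.createConsumer(replyQueue, selector);
            connection.start();
            Message reply = consumer.receive(10_000);
            // ... process the reply ...
        } finally {
            connection.close();
        }
    }
}
```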
Specify your JMS settings as follows:
If you are defining the connection settings in this panel (rather than referencing previously-defined global JMS settings as described in Using Global JMS Connections), enter JMS provider details in the Local Settings area. You can specify the Provider URL, Initial context class, and Connection factory (be sure to add the related jars to the classpath), as well as authentication and any additional initial context JNDI properties you want to apply.
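For reference, the fields in the Local Settings area map directly onto a standard JNDI lookup. The sketch below shows how those values would be used programmatically; the ActiveMQ class name, URL, JNDI property, credentials, and factory name are illustrative placeholders only, not required values.

```java
import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;

public class JmsConnectionSettingsSketch {
    public static Connection connect() throws Exception {
        Properties env = new Properties();
        // Initial context class (provider-specific; ActiveMQ shown as an example)
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.apache.activemq.jndi.ActiveMQInitialContextFactory");
        // Provider URL
        env.put(Context.PROVIDER_URL, "tcp://localhost:61616");
        // An additional initial context JNDI property (here, an ActiveMQ-style
        // queue binding; placeholder values)
        env.put("queue.requestQueue", "PROXY.REQUEST");

        Context ctx = new InitialContext(env);
        // Connection factory name as registered in JNDI (placeholder)
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("ConnectionFactory");

        // Authentication settings, if your provider requires them (placeholders)
        return factory.createConnection("jmsuser", "jmspassword");
    }
}
```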
Using multiple JMS connections in a single proxy: If you want to configure connections to different JMS servers within a single proxy (e.g., you want a specific proxy connection to use two queues that are deployed on two different JMS servers), create a global JMS connection for each connection, then select the appropriate connections as you configure the proxy.
For Queues (Point-to-Point): Virtualize or SOAtest will capture messages sent to the client Destination queue and forward them to the server Reply to queue for processing. The server Destination queue is the queue that the server will place a response message on (after processing the request message). Virtualize will capture these messages and forward them to the client Reply to queue.
Queue Configuration Details: See Configuring Queues For Recording for tips on how to configure recording in different scenarios.
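The server side of the same queue flow can be sketched similarly; the queue names below are hypothetical. The server consumes requests from the queue the proxy forwards to and places its response on the queue the proxy watches for replies.

```java
import javax.jms.*;

public class BackendResponderSketch {
    // The factory is assumed to come from the provider's JNDI context, as above.
    static void handleOneRequest(ConnectionFactory factory) throws JMSException {
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            connection.start();

            // The proxy forwards recorded requests to this queue (hypothetical name).
            MessageConsumer consumer =
                    session.createConsumer(session.createQueue("SERVER.REQUEST"));
            Message request = consumer.receive();

            // Build the reply and correlate it with the request so the proxy
            // (and ultimately the client) can match it up.
            TextMessage reply = session.createTextMessage("<confirmation/>");
            reply.setJMSCorrelationID(request.getJMSMessageID());

            // Place the response on the server's response queue; the proxy picks it
            // up from there and forwards it to the client's reply queue.
            Queue responseQueue = session.createQueue("SERVER.REPLY");
            session.createProducer(responseQueue).send(reply);
        } finally {
            connection.close();
        }
    }
}
```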
For Topics (Publish-and-Subscribe): Virtualize or SOAtest monitors incoming requests on the client Subscribe topic and outgoing responses on the server Publish topic.
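For topics, the mechanics are plain JMS publish/subscribe rather than point-to-point queues. The sketch below is a loose illustration of sitting between two topics, receiving from one and publishing to the other; the topic names are placeholders and this is not the proxy's actual implementation.

```java
import javax.jms.*;

public class TopicRelaySketch {
    // Loosely mirrors a proxy positioned between a subscribe topic and a
    // publish topic; names are hypothetical.
    static void relayOneMessage(ConnectionFactory factory) throws JMSException {
        Connection connection = factory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            connection.start();

            // Monitor incoming messages on the subscribe topic...
            Topic subscribeTopic = session.createTopic("PROXY.SUBSCRIBE");
            Message incoming = session.createConsumer(subscribeTopic).receive();

            // ...and publish on the publish topic.
            Topic publishTopic = session.createTopic("PROXY.PUBLISH");
            session.createProducer(publishTopic).send(incoming);
        } finally {
            connection.close();
        }
    }
}
```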
Adjusting the Worker Count: Each worker creates its own connection to the JMS provider. Whenever a proxy is deployed or redeployed with a worker count higher than the default value of 1, you should see messages like "Started x listener(s)" in the Console (where x is the number of workers configured). Increasing the worker count can help performance under concurrency: the entire message processing chain of the proxy is parallelized, so each worker thread performs message correlation, response message generation, and so on in parallel with the other threads. Be aware, however, that a high worker count makes deploying, undeploying, and redeploying a proxy take longer because there are more connections to create and destroy. The JMS provider may also limit the number of concurrent connections it allows; do not exceed what your infrastructure is configured to permit. The worker count feature is equivalent to the "maxThreads" attribute in Tomcat's server.xml; to adjust that value, set the maxThreads attribute on the appropriate Connector element in server.xml.
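To illustrate what the worker count corresponds to at the JMS level, the sketch below starts several independent connections, each with its own asynchronous listener, so the provider can dispatch messages to them concurrently. This is a conceptual illustration under assumed names, not the proxy's actual implementation; note that each extra worker is an extra open connection, which is why high counts slow deployment and can run into provider connection limits.

```java
import javax.jms.*;

public class ParallelWorkersSketch {
    // Each "worker" gets its own connection, session, and listener, so the JMS
    // provider can deliver messages to them in parallel.
    static void startWorkers(ConnectionFactory factory, int workerCount) throws JMSException {
        for (int i = 0; i < workerCount; i++) {
            Connection connection = factory.createConnection();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageConsumer consumer =
                    session.createConsumer(session.createQueue("PROXY.REQUEST"));
            consumer.setMessageListener(message -> {
                // Correlation, response generation, forwarding, etc. would happen
                // here, independently in each worker.
            });
            connection.start();
            // Connections are intentionally left open for the lifetime of the
            // "proxy" in this sketch.
        }
    }
}
```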
Global JMS settings that apply across a specific SOAtest and Virtualize server can be defined at the server level and referenced in these settings. See Connections Tab for details.
To use a global JMS connection, select it from the appropriate Queue or Topic box.
To review the details of a predefined global connection, click View settings.
See Configuring Queues For Recording for tips on how to configure queues in different scenarios.