This section covers the AI Assertor tool, an LLM-powered message analyzer that uses your LLM provider to determine whether a message satisfies one or more assertions, or requirements, that you write in natural language. It is most commonly connected to a tool that sends or receives messages to verify the content of those messages. Because it affects performance and the ability to generate load, the AI Assertor tool is skipped when used with Load Test.
This tool requires a validate license. Additionally, you must configure your LLM provider account information in the LLM Provider preferences. OpenAI, Azure OpenAI, and other LLM providers with similar functionality are supported. See LLM Provider preferences to determine whether your LLM provider can be used. Be aware that LLMs can provide inaccurate information, which can affect this feature. Due diligence is recommended.
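To make the underlying idea concrete, the following is a minimal conceptual sketch, not the AI Assertor's actual implementation. It shows the general pattern of asking an LLM whether a message satisfies a natural-language assertion, using the openai Python package as an example provider client; the model name, prompt wording, and function names are assumptions for illustration only.

```python
# Conceptual sketch only: NOT the AI Assertor's implementation.
# Illustrates asking an LLM whether a message satisfies an assertion,
# using the openai package as one example provider client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def check_assertion(message: str, assertion: str) -> bool:
    """Ask the LLM whether `message` satisfies `assertion`."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; use your configured model
        messages=[
            {"role": "system",
             "content": "Answer PASS or FAIL: does the message satisfy the assertion?"},
            {"role": "user",
             "content": f"Assertion: {assertion}\nMessage:\n{message}"},
        ],
    )
    return resp.choices[0].message.content.strip().upper().startswith("PASS")
```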
An AI Assertor is commonly chained to a SOAP or REST Client or a Messaging Client as a test output to verify a payload or traffic, but it can also be created as a stand-alone test. The AI Assertor supports complex validation needs by drawing on the adaptability of AI through your LLM provider.
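As a hypothetical example of this chaining, suppose a REST Client returns the cart payload below; the AI Assertor attached to its output would evaluate each natural-language assertion against that payload. The payload shape and assertion wording here are invented for illustration.

```python
# Hypothetical REST Client response payload and the kind of
# natural-language assertions a chained AI Assertor might check.
cart_response = {
    "cartId": "c-1021",
    "items": [
        {"sku": "A-100", "qty": 1, "price": 19.99},
        {"sku": "B-204", "qty": 2, "price": 5.50},
    ],
    "currency": "USD",
}

assertions = [
    "Cart has two items.",
    "Every item has a positive price.",
    "The currency is USD.",
]
```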
The AI Assertor is designed to support multiple validation assertions to create complex verifications. It can be added via the Add Output wizard or as a stand-alone Standard test.
The AI Assertor's tool settings consist of the following tabs:
To configure the AI Assertor:
The best results are generally achieved when assertions are short and clear. If you have multiple conditions that you want to validate, it is recommended that you create multiple short assertions rather than one longer one. Not only will this be less likely to confuse the AI, but it will also make targeted changes to your assertions easier. In addition, there is no need to request that the AI "check" or "verify" anything. Simply state the expectation, for example, "Cart has two items."
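The contrast below illustrates this guidance with hypothetical assertions: one overloaded assertion versus several short, declarative ones.

```python
# One overloaded assertion (harder for the LLM, harder to maintain):
overloaded = (
    "Check that the cart has two items, the total is correct, "
    "and the shipping address is in the US."
)

# Better: several short assertions, each stating one expectation
# without asking the AI to "check" or "verify" anything.
focused = [
    "Cart has two items.",
    "The total equals the sum of the item prices.",
    "The shipping address is in the US.",
]
```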
You can evaluate the AI Assertor's findings to learn more about why an assertion passed or failed. After running your test, open the AI Assertor and choose the assertion you want to evaluate. Click Evaluate and review the report that appears.
You can chain an Edit tool to the results output of the AI Assertor to view all the passing and failing assertions and the analysis reason for each of them in one place.
If you want to change your assertion, you can do so and click Evaluate again, which will query the LLM with the updated assertion against the previous input. In this way, you can fine-tune your assertions until they are functioning the way you want. If you find the AI struggling to fully comprehend your assertion, providing it with an example or two of a passing or failing condition can help.
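The sketch below outlines this refine-and-re-evaluate workflow, reusing the hypothetical check_assertion() from the earlier sketch. The saved input is re-checked against each revised assertion, so you can tune the wording without rerunning the test; all wording and data are assumptions for illustration.

```python
# Sketch of the refine-and-re-evaluate loop, reusing check_assertion()
# from the earlier sketch. The previous input is re-checked against the
# updated assertion, mirroring what clicking Evaluate again does.
saved_input = '{"items": [{"sku": "A-100"}, {"sku": "B-204"}]}'

assertion = "Cart has two items."
if not check_assertion(saved_input, assertion):
    # If the LLM struggles with the assertion, restating it with an
    # example or two of passing/failing conditions can help.
    assertion = (
        "Cart has exactly two items. "
        "Example pass: the items list has length 2. "
        "Example fail: the items list has length 1 or 3."
    )
    check_assertion(saved_input, assertion)
```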