This section covers the AI Data Bank tool, an LLM-powered message analyzer. It uses your LLM provider to extract data from messages based on conditions you describe in natural language, which makes it easy to adapt when those conditions change. It is most commonly chained to a tool that sends or receives messages so that data extracted from those messages can be used by another tool in the test scenario. Because of its impact on performance and on the ability to generate load, the AI Data Bank tool is not recommended for use with Load Test.
Before you can use this tool, you must configure the application to use your LLM provider account in the LLM Provider preferences. OpenAI, Azure OpenAI, and other LLM providers with similar functionality are supported; see LLM Provider preferences to determine whether your LLM provider can be used. Be aware that LLMs can return inaccurate information, which can affect this feature. Due diligence is recommended.
The AI Data Bank tool is commonly chained as an output to a Message Responder to extract values from a header or payload, but it can also be attached to other tools or created as a standalone test. Through your LLM provider, it uses the adaptability of AI to support complex extraction needs.
The AI Data Bank tool is designed to support multiple queries so you can create complex extractions. It can be added via the Add Output wizard or as a standalone test or responder (in SOAtest or Virtualize).
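Conceptually, each query pairs a natural-language description with an incoming message, and the configured LLM provider returns the matching value. The following is a minimal, hypothetical Python sketch of that flow, not the tool's actual implementation; the `call_llm` stub stands in for your provider (OpenAI, Azure OpenAI, etc.), and `extract_value` and the prompt wording are illustrative assumptions only.

```python
import json

def call_llm(prompt: str) -> str:
    """Stand-in for the configured LLM provider. Stubbed here so the
    sketch runs offline; the real tool sends the prompt to your
    provider account."""
    # Pretend the model located the order ID in the message body.
    return "ORD-1042"

def extract_value(query: str, message: str) -> str:
    """Assemble a natural-language extraction prompt (roughly how an
    LLM-based data bank might) and return the model's answer."""
    prompt = (
        "Extract the value described below from the message.\n"
        f"Description: {query}\n"
        f"Message:\n{message}\n"
        "Answer with the value only."
    )
    return call_llm(prompt)

message = json.dumps({"order": {"id": "ORD-1042", "total": 19.99}})
value = extract_value("the order identifier", message)
print(value)  # ORD-1042 (from the stub above)
```

Because the condition is expressed in natural language ("the order identifier") rather than a fixed XPath or regular expression, the same query can continue to work when the message structure changes.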
The AI Data Bank tool's settings consist of the following tabs:
To configure the AI Data Bank tool:
Other tools can then reference a stored value (for example, one named "My Value") as ${My Value} in literal or multiple response views.

Writable data source column: Choose this option to store the value in a writable data source column (see Configuring a Writable Data Source in SOAtest or Configuring a Writable Data Source in Virtualize). This allows you to store an array of values that other tools can then iterate over.
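To illustrate the writable data source column option, the following hypothetical Python sketch models a query that matches several values: each match is stored as a row in the named column, and a chained tool runs once per stored row. The column name, values, and dictionary representation are illustrative assumptions, not the tool's internals.

```python
# Values matched by a single query (e.g., "every email address in the payload").
extracted = ["alice@example.com", "bob@example.com", "carol@example.com"]

# Writable data source column: column name mapped to the stored rows.
writable_column = {"Emails": list(extracted)}

# A downstream tool iterates over the stored values, one row at a time.
for row in writable_column["Emails"]:
    print(row)
```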
Variable: Choose this option to save the value in the specified variable so it can be reused across the current Test, Responder, or Action suite. The variable must already be added to the current suite as described in Defining Variables (SOAtest) or Defining Variables (Virtualize). Any value set in this manner overrides any local variable value specified in the Responder suite or Action suite properties panel.
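The override behavior described above can be pictured as a simple precedence rule. This hypothetical Python sketch (the variable name and values are made up for illustration) shows a data-bank-assigned value taking priority over the local default defined on the suite:

```python
# Local variable default defined in the suite properties panel.
suite_defaults = {"SESSION_ID": "default-0"}

# Value written to the same variable by the AI Data Bank at runtime.
data_bank_values = {"SESSION_ID": "abc-123"}

# Effective value: the data bank assignment overrides the suite default.
effective = {**suite_defaults, **data_bank_values}
print(effective["SESSION_ID"])  # abc-123
```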
You can evaluate the AI Data Bank tool's findings to learn more about why a query worked or didn't work the way you expected. After running your scenario, open the AI Data Bank tool and choose the query you want to evaluate. Click Evaluate and review the report that appears.
You can chain an Edit tool to the results output of the AI Data Bank tool to view the results of all queries and the analysis reason for each extraction in one place.
If you want to change your query, you can do so and click Evaluate again, which prompts the LLM with the updated query against the previous input. In this way, you can fine-tune your queries until they work the way you want.