This section covers the AI Data Bank tool, an LLM-powered message analyzer that uses your LLM provider to extract data from messages based on conditions you describe in natural language, which allows the extraction to adapt easily to changing conditions. It is most commonly connected to a tool that sends or receives messages in order to extract data from those messages for use by another tool in the test scenario. Because of its impact on performance and on the ability to generate load, the AI Data Bank tool is not recommended for use with Load Test.

Before you can use the AI Data Bank tool, you must configure the application to use your LLM provider account in the LLM Provider preferences. OpenAI, Azure OpenAI, and other LLM providers with similar functionality are supported; see LLM Provider preferences to determine whether your LLM provider can be used. Be aware that LLMs can provide inaccurate information, which can affect the results of this feature. Due diligence is recommended.

Understanding the AI Data Bank Tool

The AI Data Bank tool is commonly chained to a Message Responder as an output to extract values from a header or payload, but it can also be added to other tools or created as a stand-alone test. The AI Data Bank tool provides support for complex extraction needs using the adaptability of AI through your LLM provider.

The AI Data Bank tool is designed to support multiple queries so that you can build complex extractions. It can be added via the Add Output wizard or created as a stand-alone test or responder in SOAtest or Virtualize.
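Conceptually, the tool combines a captured message with your natural-language query and sends both to the LLM provider. The tool's actual prompt format is internal; the sketch below only illustrates the idea, using a chat-style message list and an example payload that are assumptions, not tool output.

```python
import json

def build_extraction_prompt(message_payload: str, query: str) -> list:
    """Assemble a chat-style request asking an LLM to extract a value
    from a captured message. Illustrative only; the AI Data Bank tool's
    real prompt format is not documented here."""
    return [
        {"role": "system",
         "content": "Extract the requested value from the message. "
                    "Reply with the value only, no explanation."},
        {"role": "user",
         "content": f"Message:\n{message_payload}\n\nQuery: {query}"},
    ]

# Hypothetical captured payload and query.
payload = json.dumps({"order": {"id": "A-1001", "total": 59.90}})
prompt = build_extraction_prompt(payload, "What is the order ID?")
```

Because the query is plain language rather than an XPath or JSONPath expression, the same query keeps working when the payload's structure shifts, which is what makes the extraction adaptable.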

Configuring the AI Data Bank Tool

The AI Data Bank tool's settings are organized into tabs, including the Configuration tab, where queries are defined, and the Data Source Column tab, where extracted data is mapped to data source columns.

To configure the AI Data Bank tool:

  1. Click Add in the AI Data Bank tool’s Configuration tab. A new query will be added.

  2. Queries are automatically assigned a name, but you can rename them to make it easier to track what each prompt is meant to do.
  3. In the AI Query Prompt field, enter the query you want the AI to use to determine what data to extract.
    • The best results are generally achieved when queries are clear and specific.
    • Simple queries tend to be more effective, for example, "What is the largest value in the table?"
    • When asking for a count of items, it helps to add an instruction to "return zero if none are found" if that is an expected possibility.
    • If the AI returns a list of values when a single string is expected, try asking it to "return the value as a single string."
  4. Click the Data Source Column tab to configure which column in the data source should store the extracted data.
  5. Save your changes.
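The prompt-writing tips above can be made concrete with a few example queries. The column names and query wording below are hypothetical illustrations, not tool defaults; each query would be mapped to a column on the Data Source Column tab.

```python
# Hypothetical mapping of data source columns to AI Query Prompts,
# applying the tips above: keep queries simple and specific, ask for
# zero when a count may be empty, and request a single string when
# the LLM might otherwise return a list.
queries = {
    "largestValue": "What is the largest value in the table?",
    "errorCount": ("How many error entries are in the response? "
                   "Return zero if none are found."),
    "customerName": ("What is the customer's name? "
                     "Return the value as a single string."),
}

for column, prompt in queries.items():
    print(f"{column}: {prompt}")
```

Keeping one narrow question per query, rather than packing several extractions into one prompt, tends to give the LLM less room to drift.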

Evaluating an Extraction

You can evaluate the AI Data Bank tool's findings to learn more about why a query worked or didn't work the way you expected. After running your scenario, open the AI Data Bank tool and choose the query you want to evaluate. Click Evaluate and review the report that appears.

You can chain an Edit tool to the results output of the AI Data Bank tool to view the results of all queries and the analysis reason for each extraction in one place.

If you want to change your query, you can do so and click Evaluate again; this prompts the LLM with the updated query against the previous input. In this way, you can fine-tune your queries until they behave the way you want.
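The refinement loop above amounts to re-running an edited query against the same cached input. A minimal sketch, assuming a hypothetical `llm_extract` callable standing in for the LLM provider call (the stub below is illustration only, not real provider behavior):

```python
def evaluate(query: str, cached_input: str, llm_extract) -> str:
    """Re-run a (possibly updated) query against the previously captured
    input, mirroring what clicking Evaluate does. `llm_extract` is a
    hypothetical stand-in for the call to your LLM provider."""
    return llm_extract(query, cached_input)

# Stub provider for illustration: pretends to answer count questions.
def stub_llm(query: str, message: str) -> str:
    return "2" if "count" in query.lower() else "unknown"

cached = '{"items": [{"sku": "A"}, {"sku": "B"}]}'
first = evaluate("List the SKUs", cached, stub_llm)
refined = evaluate("What is the count of items?", cached, stub_llm)
```

Because the cached input does not change between evaluations, any difference in the result comes from the query wording alone, which is what makes this loop useful for fine-tuning.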