This section covers the AI Data Bank tool, an LLM-powered message analyzer. It uses natural language queries, evaluated by your LLM provider, to define the conditions you want to look for, which makes it easy to adapt extractions as those conditions change. It is most commonly chained to a tool that sends or receives messages in order to extract data from those messages for use by another tool in the test scenario. Because of its effect on performance and the ability to generate load, the AI Data Bank tool is not recommended for use with Load Test.

You must configure the application to use your LLM provider account in the LLM Provider preferences before you can use the AI Data Bank tool. OpenAI, Azure OpenAI, and other LLM providers with similar functionality are supported. See LLM Provider preferences to determine whether your LLM provider can be used. Be aware that LLMs can provide inaccurate information, which can affect this feature. Due diligence is recommended.

Understanding the AI Data Bank Tool

The AI Data Bank tool is commonly chained to a Message Responder as an output to extract values from a header or payload, but it can also be added to other tools or created as a stand-alone test. The AI Data Bank tool provides support for complex extraction needs using the adaptability of AI through your LLM provider.

The AI Data Bank tool is designed to support multiple queries to create complex extractions. It can be added via the Add Output wizard (SOAtest or Virtualize) or as a standalone test or responder (SOAtest or Virtualize).
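To make the idea concrete, the sketch below shows, in principle, how a natural-language query and a message can be combined into a single prompt for an LLM provider. This is an illustrative assumption only, not the tool's actual implementation; the function name and prompt wording are hypothetical.

```python
# Illustrative sketch only -- NOT Parasoft's implementation.
# Shows the general style of LLM-backed extraction: a user-written query
# plus the incoming message are combined into one prompt for the provider.
def build_extraction_prompt(query: str, message: str) -> str:
    """Combine a natural-language query and a message payload into one prompt."""
    return (
        "Extract the requested data from the message below.\n"
        "Return the value as a single string.\n"
        f"Query: {query}\n"
        f"Message:\n{message}"
    )

# Hypothetical sample payload and query.
sample_message = '{"orders": [{"id": 101, "total": 42.5}, {"id": 102, "total": 17.0}]}'
prompt = build_extraction_prompt("What is the largest order total?", sample_message)
print(prompt)
```

The response returned by the provider for such a prompt is what the tool would then store in a data source column or variable, as described in the configuration steps below.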

Configuring the AI Data Bank Tool

The AI Data Bank tool's settings consist of the following tabs:

  • Configuration: This tab is used to create and configure queries for the LLM.
  • Expected Document: Specifies the expected document, creating a template from which you can select elements. If the AI Data Bank tool receives a valid document (for example, from traffic or from the tool it is attached to), this panel is populated automatically. Alternatively, you can copy a sample message into this tab. Note that the expected document is not saved by default; if you want to save it, enable Save expected document.
  • Input: Specifies either text or a file on which the tool should operate. If the AI Data Bank tool is chained to another tool, this tab does not appear, and the AI Data Bank tool uses the output of that tool as its input.

To configure the AI Data Bank tool:

  1. Click Add in the AI Data Bank tool’s Configuration tab. A new query will be added.

  2. (Optional) Rename the query. Queries are automatically assigned a name, but you can change it to make it easier to track what multiple prompts are meant to do.
  3. Enter the query you want the AI to use to determine what data to extract in the AI Query Prompt field.

    Best Practices

    • The best results are generally achieved when queries are clear and specific.
    • Simple queries tend to be more effective, for example, "What is the largest value in the table?"
    • When asking for a count of items, it helps to add an instruction to "return zero if none are found" if that is an expected possibility.
    • If the AI returns a list of values when a single string is expected, try asking it to "return the value as a single string."
  4. Click the Data Source Column tab to configure which column in the data source should store the extracted data.
    • Custom column name: Choose this option to specify the name of the data source column in which to store the value. This is the name you will use to reference the value in other places. For example, if it is stored in a data source column named My Value, you would choose My Value as the parameterized value. You could also reference it as ${My Value} in literal or multiple response views.
    • Writable data source column: Choose this option to store the value in a writable data source column (see Configuring a Writable Data Source in SOAtest or Configuring a Writable Data Source in Virtualize). This allows you to store an array of values. Other tools can then iterate over the stored values.

    • Variable: Choose this option to save the value in the specified variable so it can be reused across the current Test, Responder, or Action suites. The variable must already be added to the current suite as described in Defining Variables (SOAtest) or Defining Variables (Virtualize). Any values set in this manner will override any local variable values specified in the Responder suite or Action suite properties panel.

  5. Save your changes.
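As an illustration of the Custom column name option described above, suppose a query stores its result in a data source column named My Value (a hypothetical name). Another tool in the scenario could then reference the extracted value in a literal view using the documented ${...} syntax:

```xml
<order>
  <!-- ${My Value} is replaced at runtime with the value extracted by the AI Data Bank tool -->
  <total>${My Value}</total>
</order>
```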

Evaluating an Extraction

You can evaluate the AI Data Bank tool's findings to learn more about why a query worked or didn't work the way you expected. After running your scenario, open the AI Data Bank tool and choose the query you want to evaluate. Click Evaluate and review the report that appears.

You can chain an Edit tool to the results output of the AI Data Bank tool to view the results of all queries and the analysis reason for each extraction in one place.

If you want to change your query, you can do so and click Evaluate again, which will prompt the LLM with the updated query against the previous input. In this way, you can fine-tune your queries until they are functioning the way you want.
