...
The LLM Provider panel lets you configure the application to use an LLM model through your OpenAI, Azure OpenAI, or other LLM provider account to assist you when creating test scenarios and when using the AI Assistant. The model you use must support function calling, and the provider must expose a chat completions endpoint compatible with the OpenAI REST API. The provider must also offer an embedding endpoint compatible with the OpenAI REST API; for best performance, an embedding model with a token limit of at least 8,000 is recommended. This functionality has been tested with OpenAI's GPT-4o and GPT-4o-mini models and the text-embedding-ada-002 embedding model.
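For reference, the two OpenAI-compatible requests the provider must accept have the following shapes. This is a minimal sketch: the base URL, model names, and the `create_test_scenario` tool are illustrative placeholders, not part of the application itself.

```python
import json

# Illustrative base URL; substitute your provider's OpenAI-compatible endpoint.
BASE_URL = "https://api.openai.com/v1"

# Chat completions request. The "tools" field is how function calling is
# declared in the OpenAI REST API; the tool shown here is hypothetical.
chat_request = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "user", "content": "Create a test scenario for the login page."}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "create_test_scenario",  # hypothetical tool name
                "description": "Create a test scenario from a description.",
                "parameters": {
                    "type": "object",
                    "properties": {"title": {"type": "string"}},
                    "required": ["title"],
                },
            },
        }
    ],
}

# Embeddings request. The model should accept inputs of at least 8,000 tokens.
embedding_request = {
    "model": "text-embedding-ada-002",
    "input": "Scenario: user logs in with valid credentials.",
}

# The requests above would be POSTed to these two endpoints:
print(f"{BASE_URL}/chat/completions")
print(f"{BASE_URL}/embeddings")
print(json.dumps(chat_request, indent=2)[:80])
```

A provider is compatible if it accepts both payloads at those two endpoint paths and returns responses in the OpenAI response format.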
...