The LLM Provider panel allows you to configure the application to use your LLM account to generate recommended fixes for static analysis violations and failing unit tests, to suggest improvements for unit tests, and to interact with the AI Assistant. This functionality has been tested with GPT-4o, GPT-4o-mini, and GPT-4.1 from OpenAI.
Prerequisites:
- The "LLM Integration" feature must be enabled in your license.
- Access to your OpenAI, Azure OpenAI, or another LLM provider account is required.

Note: Only LLM providers are supported that:
- Have a chat completions endpoint compatible with the OpenAI REST API (see the verification sketch after this list).
- Provide an embedding endpoint compatible with the OpenAI REST API (an embedding model with at least an 8,000 token limit is recommended for best performance).
- Are not reasoning models.
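If you are unsure whether a provider meets the two endpoint requirements, you can probe its endpoints directly. The sketch below is a minimal check in Python using the `requests` package; the base URL, API key, and model names are placeholders, not values from this document, so substitute your provider's actual details. A provider that answers both requests in the OpenAI response format exposes compatible chat completions and embedding endpoints.

```python
import requests

# Placeholder values -- substitute your provider's actual base URL and key.
BASE_URL = "https://api.example-llm.com/v1"  # hypothetical OpenAI-compatible endpoint
API_KEY = "sk-..."                           # your provider API key
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# 1. Chat completions endpoint (OpenAI-style request/response shape).
chat = requests.post(
    f"{BASE_URL}/chat/completions",
    headers=HEADERS,
    json={
        "model": "gpt-4o",  # any non-reasoning chat model your provider hosts
        "messages": [{"role": "user", "content": "Reply with the word: pong"}],
    },
    timeout=30,
)
chat.raise_for_status()
print(chat.json()["choices"][0]["message"]["content"])

# 2. Embeddings endpoint (OpenAI-style; prefer a model with an 8,000+ token limit).
emb = requests.post(
    f"{BASE_URL}/embeddings",
    headers=HEADERS,
    json={"model": "text-embedding-3-small", "input": "compatibility check"},
    timeout=30,
)
emb.raise_for_status()
print(len(emb.json()["data"][0]["embedding"]), "embedding dimensions")
```

If either request fails or returns a response that does not match the OpenAI schema shown above, the provider does not satisfy the corresponding requirement.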
Follow these steps to enable the LLM functionality:
...
Note: LLMs may produce inaccurate information, which may affect product features that depend on LLMs.
Note for Eclipse users: If you are going through a proxy to connect to your LLM account, your Eclipse proxy settings are used, not your Parasoft proxy settings.
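When diagnosing proxy-related connection failures, it can help to confirm outside the IDE that the provider endpoint is reachable through the same proxy Eclipse uses. The following is a minimal sketch, assuming a proxy at proxy.example.com:8080 and the endpoint URL from the previous sketch (both placeholders):

```python
import requests

# Placeholder proxy -- use the host/port configured in Eclipse
# (Preferences > General > Network Connections), not the Parasoft settings.
proxies = {
    "http": "http://proxy.example.com:8080",
    "https": "http://proxy.example.com:8080",
}

# Simple reachability check against the provider's model listing endpoint.
resp = requests.get(
    "https://api.example-llm.com/v1/models",   # hypothetical base URL
    headers={"Authorization": "Bearer sk-..."},  # your provider API key
    proxies=proxies,
    timeout=30,
)
print(resp.status_code)  # 200 indicates the endpoint is reachable via the proxy
```

If this check succeeds but Eclipse still cannot connect, verify that the Eclipse proxy settings match the proxy used above.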