The LLM Provider panel allows you to configure the application to use your LLM account to generate recommended fixes for static analysis violations. The functionality has been tested with OpenAI's GPT-4o and GPT-4o-mini models.
Note:
- LLM integration in Visual Studio is supported in VS 2019 and later.
- The functionality does not support VB.NET.
- The "LLM Integration" feature must be enabled in your license.
- Access to an OpenAI, Azure OpenAI, or other LLM provider account is required.
- Only LLM providers with a chat completions endpoint compatible with the OpenAI REST API are supported.
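To illustrate what "compatible with the OpenAI REST API" means in practice, here is a minimal sketch of the request shape such a provider must accept. The URL, key, and model name are placeholders for illustration, not values taken from this documentation:

```python
import requests

# Placeholder values for illustration; substitute your provider's details.
BASE_URL = "https://api.openai.com/v1"
API_KEY = "sk-..."

# A compatible provider must accept a POST to {base}/chat/completions with
# this body shape and return a "choices" array in the JSON response.
response = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```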
Follow these steps to enable the LLM functionality:
- In your IDE, click Parasoft in the menu bar and choose Options.
- Select LLM Provider.
- Set the following preferences:
- Enable: Enables generating AI-recommended fixes for static analysis violations.
- Provider: Choose OpenAI, Azure OpenAI, or Other.
- Configuration (OpenAI)
- API key: Your OpenAI token.
- Organization ID: (Optional) The organization ID you want to use when accessing the OpenAI client. If no organization ID is supplied, the default organization ID for your account is used.
- Test Connection: Click to test the connection to OpenAI.
- Model > Chat: The name of your OpenAI model, for example gpt-4o.
- Model > Embedding: (Optional) The name of your OpenAI embedding model.
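If Test Connection fails and you want to verify the API key and organization ID outside the IDE, listing the models available to your account is a lightweight manual check. This is only an approximation of the plugin's own connection test, not a description of it:

```python
import requests

API_KEY = "sk-..."   # your OpenAI token
ORG_ID = ""          # optional; leave empty to use your default organization

headers = {"Authorization": f"Bearer {API_KEY}"}
if ORG_ID:
    headers["OpenAI-Organization"] = ORG_ID

# Listing models is a cheap way to confirm the key (and organization) are valid.
resp = requests.get("https://api.openai.com/v1/models", headers=headers, timeout=30)
resp.raise_for_status()
print("Connection OK; models visible to this account:", len(resp.json()["data"]))
```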
- Configuration (Azure OpenAI)
- Resource Name: The Azure resource that contains your deployed Azure OpenAI model.
- API key: Your Azure OpenAI token.
- Deployment ID > Chat: The deployment name of your Azure OpenAI model.
- Deployment ID > Embedding: (Optional) The deployment name of your Azure OpenAI embedding model.
- Test Connection: Click to test the connection to Azure OpenAI.
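The Resource Name and Deployment ID together determine the Azure OpenAI endpoint that gets called. The following sketch uses a hypothetical resource and deployment; the api-version value varies, so check the Azure OpenAI documentation for one your resource supports:

```python
import requests

RESOURCE_NAME = "my-resource"   # hypothetical Azure resource name
DEPLOYMENT_ID = "my-gpt-4o"     # hypothetical chat model deployment name
API_KEY = "..."                 # your Azure OpenAI token
API_VERSION = "2024-02-01"      # pick an api-version your resource supports

# Azure OpenAI addresses the model by deployment, so no "model" field is
# needed in the request body.
url = (
    f"https://{RESOURCE_NAME}.openai.azure.com/openai/deployments/"
    f"{DEPLOYMENT_ID}/chat/completions?api-version={API_VERSION}"
)
resp = requests.post(
    url,
    headers={"api-key": API_KEY},
    json={"messages": [{"role": "user", "content": "Hello"}]},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```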
- Configuration (Other)
- Base URL: The URL where your LLM model is deployed.
- API key / Access token: Your LLM provider token.
- Model > Chat: The name of your LLM model.
- Model > Embedding: (Optional) The name of your LLM embedding model.
- Test Connection: Click to test the connection to the LLM provider.
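For an "Other" provider, the base URL, chat model, and optional embedding model map onto the OpenAI-style endpoints. A rough manual check, with all values hypothetical:

```python
import requests

BASE_URL = "http://localhost:8000/v1"   # hypothetical OpenAI-compatible endpoint
API_KEY = "..."                         # your provider's API key / access token
CHAT_MODEL = "my-chat-model"            # hypothetical model names
EMBEDDING_MODEL = "my-embedding-model"

headers = {"Authorization": f"Bearer {API_KEY}"}

# The chat model should answer on {base}/chat/completions.
chat = requests.post(
    f"{BASE_URL}/chat/completions",
    headers=headers,
    json={"model": CHAT_MODEL, "messages": [{"role": "user", "content": "ping"}]},
    timeout=30,
)
chat.raise_for_status()

# If an embedding model is configured, it should answer on {base}/embeddings.
emb = requests.post(
    f"{BASE_URL}/embeddings",
    headers=headers,
    json={"model": EMBEDDING_MODEL, "input": "ping"},
    timeout=30,
)
emb.raise_for_status()
print("Chat and embedding endpoints both reachable.")
```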
LLMs may produce inaccurate information, which may affect product features that depend on them.