AzureOpenAIModelOptions interface

Options for configuring an OpenAIModel to call an Azure OpenAI hosted model.

Extends

BaseOpenAIModelOptions

Properties

azureADTokenProvider

Optional. A function that returns an access token for Microsoft Entra (formerly known as Azure Active Directory), which will be invoked on every request.

azureApiKey

API key to use when making requests to Azure OpenAI.

azureApiVersion

Optional. Version of the API being called. Defaults to 2023-05-15.

azureDefaultDeployment

Default name of the Azure OpenAI deployment (model) to use.

azureEndpoint

Deployment endpoint to use.

Inherited Properties

clientOptions

Optional. Custom client options to use when calling the OpenAI API.

logRequests

Optional. Whether to log requests to the console.

maxRetries

Optional. Maximum number of retries to use when calling the OpenAI API.

requestConfig

Optional. Request options to use when calling the OpenAI API. Deprecated.

responseFormat

Optional. Forces the model to return a specific response format.

retryPolicy

Optional. Retry policy to use when calling the OpenAI API. Deprecated; use maxRetries instead.

seed

Optional. A static seed to use when making model calls.

stream

Optional. Whether the model's responses should be streamed back using Server-Sent Events (SSE).

useSystemMessages

Optional. Whether to use system messages when calling the OpenAI API.

Property Details

azureADTokenProvider

Optional. A function that returns an access token for Microsoft Entra (formerly known as Azure Active Directory), which will be invoked on every request.

azureADTokenProvider?: () => Promise<string>

Property Value

() => Promise<string>
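
Because the provider is invoked on every request, a common pattern is to cache the token and only refresh it near expiry. The sketch below assumes a hypothetical `fetchToken` callback standing in for a real credential call (in a real app this would typically come from `@azure/identity`); it is an illustration of the provider signature, not the library's implementation.

```typescript
// Wraps a raw token fetcher so repeated requests reuse the same token
// until it is close to expiry. `TokenFetcher` is a hypothetical stand-in
// for a real Entra credential call.
type TokenFetcher = () => Promise<{ token: string; expiresOnMs: number }>;

function makeCachedTokenProvider(
  fetchToken: TokenFetcher,
  refreshSkewMs = 120_000 // refresh 2 minutes before expiry
): () => Promise<string> {
  let cached: { token: string; expiresOnMs: number } | undefined;
  return async () => {
    if (!cached || Date.now() >= cached.expiresOnMs - refreshSkewMs) {
      cached = await fetchToken();
    }
    return cached.token;
  };
}

// Usage sketch: the result matches the documented () => Promise<string> shape.
const azureADTokenProvider = makeCachedTokenProvider(async () => ({
  token: "example-access-token", // placeholder; a real fetcher would call Entra
  expiresOnMs: Date.now() + 3_600_000,
}));
```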

azureApiKey

API key to use when making requests to Azure OpenAI.

azureApiKey?: string

Property Value

string

azureApiVersion

Optional. Version of the API being called. Defaults to 2023-05-15.

azureApiVersion?: string

Property Value

string

azureDefaultDeployment

Default name of the Azure OpenAI deployment (model) to use.

azureDefaultDeployment: string

Property Value

string

azureEndpoint

Deployment endpoint to use.

azureEndpoint: string

Property Value

string
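
Putting the Azure-specific properties together, a key-based configuration might look like the sketch below. The interface here is a minimal local mirror of the documented shape (in an app you would import `AzureOpenAIModelOptions` from the library), and the endpoint, deployment name, and key values are placeholders.

```typescript
// Minimal local mirror of the documented option shape, for illustration only.
interface AzureOpenAIModelOptionsLike {
  azureApiKey?: string;
  azureApiVersion?: string;
  azureDefaultDeployment: string; // required
  azureEndpoint: string;          // required
}

// azureEndpoint and azureDefaultDeployment are required; azureApiVersion
// falls back to 2023-05-15 when omitted.
const options: AzureOpenAIModelOptionsLike = {
  azureEndpoint: "https://my-resource.openai.azure.com", // hypothetical resource
  azureDefaultDeployment: "gpt-4o-deployment",           // hypothetical deployment name
  azureApiKey: "example-api-key",                        // placeholder key
  azureApiVersion: "2023-05-15",
};
```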

Inherited Property Details

clientOptions

Optional. Custom client options to use when calling the OpenAI API.

clientOptions?: ClientOptions

Property Value

ClientOptions

Inherited From BaseOpenAIModelOptions.clientOptions

logRequests

Optional. Whether to log requests to the console.

logRequests?: boolean

Property Value

boolean

Remarks

This is useful for debugging prompts and defaults to false.

Inherited From BaseOpenAIModelOptions.logRequests

maxRetries

Optional. Maximum number of retries to use when calling the OpenAI API.

maxRetries?: number

Property Value

number

Remarks

The default is to retry twice.

Inherited From BaseOpenAIModelOptions.maxRetries

requestConfig

Warning

This API is now deprecated.

Optional. Request options to use when calling the OpenAI API.

requestConfig?: AxiosRequestConfig<any>

Property Value

AxiosRequestConfig<any>

Inherited From BaseOpenAIModelOptions.requestConfig

responseFormat

Optional. Forces the model to return a specific response format.

responseFormat?: { type: "json_object" }

Property Value

{ type: "json_object" }

Remarks

This can be used to force the model to always return a valid JSON object.

Inherited From BaseOpenAIModelOptions.responseFormat
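
With JSON mode enabled, the caller can parse the model's reply directly instead of stripping surrounding prose. A small sketch, where the reply string is a hypothetical example of model output:

```typescript
// The documented shape: the only supported value forces JSON output.
const responseFormat: { type: "json_object" } = { type: "json_object" };

// Hypothetical reply a model might send back when JSON mode is on.
const reply = '{"sentiment": "positive", "score": 0.92}';
const parsed = JSON.parse(reply) as { sentiment: string; score: number };
```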

retryPolicy

Warning

This API is now deprecated.

Optional. Retry policy to use when calling the OpenAI API.

retryPolicy?: number[]

Property Value

number[]

Remarks

Use maxRetries instead.

Inherited From BaseOpenAIModelOptions.retryPolicy

seed

Optional. A static seed to use when making model calls.

seed?: number

Property Value

number

Remarks

The default is to use a random seed. Specifying a seed will make the model deterministic.

Inherited From BaseOpenAIModelOptions.seed

stream

Optional. Whether the model's responses should be streamed back using Server-Sent Events (SSE).

stream?: boolean

Property Value

boolean

Remarks

Defaults to false.

Inherited From BaseOpenAIModelOptions.stream

useSystemMessages

Optional. Whether to use system messages when calling the OpenAI API.

useSystemMessages?: boolean

Property Value

boolean

Remarks

The current generation of models tends to follow instructions from user messages better than from system messages, so the default is false, which causes any system messages in the prompt to be sent as user messages instead.

Inherited From BaseOpenAIModelOptions.useSystemMessages
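
The remapping described in the remarks can be sketched as follows. The message type here is a generic role/content pair, not the library's exact type, and `applyUseSystemMessages` is a hypothetical helper illustrating the documented default behavior.

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// With useSystemMessages at its default of false, system messages are
// re-tagged as user messages before being sent, as the remarks describe.
function applyUseSystemMessages(
  messages: ChatMessage[],
  useSystemMessages = false
): ChatMessage[] {
  if (useSystemMessages) return messages;
  return messages.map((m) =>
    m.role === "system" ? { ...m, role: "user" as const } : m
  );
}

const prompt: ChatMessage[] = [
  { role: "system", content: "You are a terse assistant." },
  { role: "user", content: "Summarize this ticket." },
];
const sent = applyUseSystemMessages(prompt); // both messages now carry role "user"
```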