AzureChatCompletion Class

Azure Chat completion class.

Initialize an AzureChatCompletion service.

Constructor

AzureChatCompletion(service_id: str | None = None, api_key: str | None = None, deployment_name: str | None = None, endpoint: str | None = None, base_url: str | None = None, api_version: str | None = None, ad_token: str | None = None, ad_token_provider: Callable[[], str | Awaitable[str]] | None = None, token_endpoint: str | None = None, default_headers: Mapping[str, str] | None = None, async_client: AsyncAzureOpenAI | None = None, env_file_path: str | None = None, env_file_encoding: str | None = None, instruction_role: str | None = None)

Parameters

Name Description
service_id
str | None

The service ID for the Azure deployment. (Optional)

Default value: None
api_key
str | None

The optional API key. If provided, it will override the value in the env vars or .env file.

Default value: None
deployment_name
str | None

The optional deployment name. If provided, it will override the value (chat_deployment_name) in the env vars or .env file.

Default value: None
endpoint
str | None

The optional deployment endpoint. If provided, it will override the value in the env vars or .env file.

Default value: None
base_url
str | None

The optional deployment base_url. If provided, it will override the value in the env vars or .env file.

Default value: None
api_version
str | None

The optional deployment API version. If provided, it will override the value in the env vars or .env file.

Default value: None
ad_token
str | None

The Azure Active Directory token. (Optional)

Default value: None
ad_token_provider
Callable[[], str | Awaitable[str]] | None

The Azure Active Directory token provider. (Optional)

Default value: None
token_endpoint
str | None

The token endpoint to request an Azure token. (Optional)

Default value: None
default_headers
Mapping[str, str] | None

The default headers mapping of string keys to string values for HTTP requests. (Optional)

Default value: None
async_client
AsyncAzureOpenAI | None

An existing client to use. (Optional)

Default value: None
env_file_path
str | None

The path to the environment settings file, used as a fallback to env vars. (Optional)

Default value: None
env_file_encoding
str | None

The encoding of the environment settings file, defaults to 'utf-8'.

Default value: None
instruction_role
str | None

The role to use for 'instruction' messages, for example, summarization prompts could use developer or system. (Optional)

Default value: None

Methods

from_dict

Initialize an Azure OpenAI service from a dictionary of settings.

get_prompt_execution_settings_class

Get the prompt execution settings class for this service.

split_message

Split an Azure On Your Data response into separate ChatMessageContents.

If the message does not contain exactly three contents, one each of FunctionCallContent, FunctionResultContent, and TextContent, fewer than three messages may be returned, possibly only one or two.

The order of the returned messages is as expected by OpenAI.

from_dict

Initialize an Azure OpenAI service from a dictionary of settings.

from_dict(settings: dict[str, Any]) -> AzureChatCompletion

Parameters

Name Description
settings
Required

A dictionary of settings for the service. It should contain the key service_id and, optionally:

ad_auth, ad_token_provider, default_headers
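
A minimal sketch of the settings dictionary shape described above; the key names come from the parameter description, while the values are illustrative.

```python
# Build the settings dictionary from_dict expects: "service_id" is required,
# "ad_auth", "ad_token_provider", and "default_headers" are optional.
settings = {
    "service_id": "azure-chat",                    # required
    "default_headers": {"x-client-name": "demo"},  # optional
}

# In real use (assumes semantic-kernel is installed):
#   from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion
#   service = AzureChatCompletion.from_dict(settings)
```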

get_prompt_execution_settings_class

Get the prompt execution settings class for this service.

get_prompt_execution_settings_class() -> type[PromptExecutionSettings]

split_message

Split an Azure On Your Data response into separate ChatMessageContents.

If the message does not contain exactly three contents, one each of FunctionCallContent, FunctionResultContent, and TextContent, fewer than three messages may be returned, possibly only one or two.

The order of the returned messages is as expected by OpenAI.

static split_message(message: ChatMessageContent) -> list[ChatMessageContent]

Parameters

Name Description
message
Required

The ChatMessageContent message to split.
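
Conceptually, splitting turns one message that carries several content items into one message per item, preserving the original item order. The dataclasses below are simplified stand-ins for ChatMessageContent and its content types, not the library's actual classes.

```python
from dataclasses import dataclass


@dataclass
class Content:
    kind: str   # e.g. "function_call", "function_result", or "text"
    value: str


@dataclass
class ChatMessage:
    """Simplified stand-in for ChatMessageContent."""
    role: str
    items: list


def split_message(message: ChatMessage) -> list[ChatMessage]:
    # One message per content item; order follows the original items list,
    # which is the order OpenAI expects (call, result, then text).
    return [ChatMessage(role=message.role, items=[item]) for item in message.items]


combined = ChatMessage(role="assistant", items=[
    Content("function_call", "search_docs"),
    Content("function_result", "3 results"),
    Content("text", "Here is what I found."),
])
parts = split_message(combined)
```

With fewer than three items, the same logic naturally returns fewer messages, matching the caveat in the description above.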

Attributes

ai_model_id

ai_model_id: Annotated[str, StringConstraints(strip_whitespace=True, min_length=1)]

ai_model_type

ai_model_type: OpenAIModelTypes

client

client: AsyncOpenAI

completion_tokens

completion_tokens: int

instruction_role

instruction_role: str

prompt_tokens

prompt_tokens: int

service_id

service_id: str

total_tokens

total_tokens: int
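
The three token counters relate in the usual way: total_tokens is prompt_tokens plus completion_tokens, accumulated across requests. The Usage class below is a stand-in sketch of that bookkeeping, not the library's implementation.

```python
from dataclasses import dataclass


@dataclass
class Usage:
    """Hypothetical stand-in for the service's token counters."""
    prompt_tokens: int = 0
    completion_tokens: int = 0

    @property
    def total_tokens(self) -> int:
        # Total is always the sum of the two running counters.
        return self.prompt_tokens + self.completion_tokens

    def add(self, prompt: int, completion: int) -> None:
        # Accumulate per-request usage into the running counters.
        self.prompt_tokens += prompt
        self.completion_tokens += completion


usage = Usage()
usage.add(prompt=120, completion=35)
usage.add(prompt=80, completion=20)
```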