AzureChatCompletion Class

Azure Chat completion class.

Initialize an AzureChatCompletion service.

Inheritance
AzureChatCompletion

Constructor

AzureChatCompletion(service_id: str | None = None, api_key: str | None = None, deployment_name: str | None = None, endpoint: str | None = None, base_url: str | None = None, api_version: str | None = None, ad_token: str | None = None, ad_token_provider: Callable[[], str | Awaitable[str]] | None = None, token_endpoint: str | None = None, default_headers: Mapping[str, str] | None = None, async_client: AsyncAzureOpenAI | None = None, env_file_path: str | None = None, env_file_encoding: str | None = None)

Parameters

Name Description
service_id
str | None

The service ID for the Azure deployment. (Optional)

Default value: None
api_key
str | None

The optional API key. If provided, it will override the value in the env vars or .env file.

Default value: None
deployment_name
str | None

The optional deployment name. If provided, it will override the value (chat_deployment_name) in the env vars or .env file.

Default value: None
endpoint
str | None

The optional deployment endpoint. If provided, it will override the value in the env vars or .env file.

Default value: None
base_url
str | None

The optional deployment base_url. If provided, it will override the value in the env vars or .env file.

Default value: None
api_version
str | None

The optional deployment API version. If provided, it will override the value in the env vars or .env file.

Default value: None
ad_token
str | None

The Azure Active Directory token. (Optional)

Default value: None
ad_token_provider
Callable[[], str | Awaitable[str]] | None

The Azure Active Directory token provider. (Optional)

Default value: None
token_endpoint
str | None

The token endpoint to request an Azure token. (Optional)

Default value: None
default_headers
Mapping[str, str] | None

The default headers mapping of string keys to string values for HTTP requests. (Optional)

Default value: None
async_client
AsyncAzureOpenAI | None

An existing client to use. (Optional)

Default value: None
env_file_path
str | None

The path to the environment settings file, used as a fallback to environment variables.

Default value: None
env_file_encoding
str | None

The encoding of the environment settings file, defaults to 'utf-8'.

Default value: None
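
A minimal construction sketch. The service_id, deployment name, and endpoint below are hypothetical; any argument left out is resolved from environment variables or a .env file instead.

from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

# Hypothetical deployment name and endpoint; omit any argument to fall back
# to the corresponding environment variable or .env file entry.
chat_service = AzureChatCompletion(
    service_id="azure-chat",
    deployment_name="my-gpt-4o-deployment",
    endpoint="https://my-resource.openai.azure.com/",
    api_key="<your-api-key>",
)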

Methods

from_dict

Initialize an Azure OpenAI service from a dictionary of settings.

get_prompt_execution_settings_class

Get the prompt execution settings class for this service.

split_message

Split an Azure On Your Data response into separate ChatMessageContents.

If the message does not contain exactly three contents (one each of FunctionCallContent, FunctionResultContent, and TextContent), fewer than three messages may be returned, potentially only one or two.

The order of the returned messages is as expected by OpenAI.

from_dict

Initialize an Azure OpenAI service from a dictionary of settings.

from_dict(settings: dict[str, Any]) -> AzureChatCompletion

Parameters

Name Description
settings
Required

A dictionary of settings for the service. It should contain the key service_id and, optionally, the keys ad_auth, ad_token_provider, and default_headers.
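
A minimal sketch of initializing the service from a settings dictionary; it assumes the deployment name, endpoint, and API key are supplied through environment variables or a .env file.

from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

# Only service_id is required in the dictionary; default_headers (shown here
# with a hypothetical header) and the AD-related keys are optional. Remaining
# settings are resolved from the environment.
settings = {
    "service_id": "azure-chat",
    "default_headers": {"x-custom-header": "example"},
}
chat_service = AzureChatCompletion.from_dict(settings)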

get_prompt_execution_settings_class

Get the prompt execution settings class for this service.

get_prompt_execution_settings_class() -> type[PromptExecutionSettings]
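
A short sketch of using the returned class to build request settings; temperature and max_tokens are assumed OpenAI-style options, not fields documented on this page.

from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

chat_service = AzureChatCompletion(service_id="azure-chat")  # config from env vars

# The returned class is the PromptExecutionSettings subclass matching this service.
settings_class = chat_service.get_prompt_execution_settings_class()
execution_settings = settings_class(temperature=0.2, max_tokens=256)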

split_message

Split an Azure On Your Data response into separate ChatMessageContents.

If the message does not contain exactly three contents (one each of FunctionCallContent, FunctionResultContent, and TextContent), fewer than three messages may be returned, potentially only one or two.

The order of the returned messages is as expected by OpenAI.

static split_message(message: ChatMessageContent) -> list[ChatMessageContent]

Parameters

Name Description
message
Required

The ChatMessageContent from an Azure On Your Data response to split.
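
A usage sketch; response_message stands for a ChatMessageContent obtained from an Azure On Your Data chat request and is assumed to exist.

from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion
from semantic_kernel.contents import ChatMessageContent

# response_message: a ChatMessageContent returned by an Azure On Your Data
# request (assumed to exist in the surrounding code).
messages: list[ChatMessageContent] = AzureChatCompletion.split_message(response_message)
for msg in messages:
    print(msg.role, [type(item).__name__ for item in msg.items])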

Attributes

model_computed_fields

A dictionary of computed field names and their corresponding ComputedFieldInfo objects.

model_computed_fields: ClassVar[Dict[str, ComputedFieldInfo]] = {}

model_config

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'populate_by_name': True, 'validate_assignment': True}

model_fields

Metadata about the fields defined on the model, mapping of field names to [FieldInfo][pydantic.fields.FieldInfo] objects.

This replaces Model.fields from Pydantic V1.

model_fields: ClassVar[Dict[str, FieldInfo]] = {'ai_model_id': FieldInfo(annotation=str, required=True, metadata=[StringConstraints(strip_whitespace=True, to_upper=None, to_lower=None, strict=None, min_length=1, max_length=None, pattern=None)]), 'ai_model_type': FieldInfo(annotation=OpenAIModelTypes, required=False, default=<OpenAIModelTypes.CHAT: 'chat'>), 'client': FieldInfo(annotation=AsyncOpenAI, required=True), 'completion_tokens': FieldInfo(annotation=int, required=False, default=0), 'prompt_tokens': FieldInfo(annotation=int, required=False, default=0), 'service_id': FieldInfo(annotation=str, required=False, default=''), 'total_tokens': FieldInfo(annotation=int, required=False, default=0)}

ai_model_id

ai_model_id: Annotated[str, StringConstraints(strip_whitespace=True, min_length=1)]

ai_model_type

ai_model_type: OpenAIModelTypes

client

client: AsyncOpenAI

completion_tokens

completion_tokens: int

prompt_tokens

prompt_tokens: int

service_id

service_id: str

total_tokens

total_tokens: int
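
A hedged sketch of reading the token-counter attributes after a request; the get_chat_message_contents call, its keyword names, and the AzureChatPromptExecutionSettings import follow the chat-completion base interface and are assumptions, not part of this page.

import asyncio

from semantic_kernel.connectors.ai.open_ai import (
    AzureChatCompletion,
    AzureChatPromptExecutionSettings,
)
from semantic_kernel.contents import ChatHistory


async def main() -> None:
    chat_service = AzureChatCompletion(service_id="azure-chat")  # config from env vars

    history = ChatHistory()
    history.add_user_message("Summarize the Azure OpenAI service in one sentence.")

    # Inherited chat call; the keyword names are an assumption.
    await chat_service.get_chat_message_contents(
        chat_history=history,
        settings=AzureChatPromptExecutionSettings(max_tokens=128),
    )

    # The counters below are the attributes documented above.
    print(chat_service.prompt_tokens, chat_service.completion_tokens, chat_service.total_tokens)


asyncio.run(main())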