MistralAIPromptExecutionSettings Class

Definition

Mistral Execution Settings.

C#
[System.Text.Json.Serialization.JsonNumberHandling(System.Text.Json.Serialization.JsonNumberHandling.AllowReadingFromString)]
public sealed class MistralAIPromptExecutionSettings : Microsoft.SemanticKernel.PromptExecutionSettings

F#
[<System.Text.Json.Serialization.JsonNumberHandling(System.Text.Json.Serialization.JsonNumberHandling.AllowReadingFromString)>]
type MistralAIPromptExecutionSettings = class
    inherit PromptExecutionSettings

VB
Public NotInheritable Class MistralAIPromptExecutionSettings
    Inherits PromptExecutionSettings
Inheritance
Object → PromptExecutionSettings → MistralAIPromptExecutionSettings
Attributes
JsonNumberHandlingAttribute

Constructors

MistralAIPromptExecutionSettings()
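
As a usage sketch, the parameterless constructor is typically paired with object-initializer syntax, and the settings travel with the invocation arguments. The AddMistralChatCompletion registration, model id, and API key below are assumptions based on the Microsoft.SemanticKernel.Connectors.MistralAI package, not part of this page.

C#
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.MistralAI;

// Assumed connector registration; the model id and API key are placeholders.
var kernel = Kernel.CreateBuilder()
    .AddMistralChatCompletion(modelId: "mistral-small-latest", apiKey: "<your-api-key>")
    .Build();

var settings = new MistralAIPromptExecutionSettings
{
    Temperature = 0.7,
    MaxTokens = 256,
};

// The settings ride along with the prompt's arguments.
var result = await kernel.InvokePromptAsync(
    "Summarize: {{$input}}",
    new KernelArguments(settings) { ["input"] = "Mistral is a model provider." });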

Properties

ApiVersion

The API version to use.

ExtensionData

Extra properties that may be included in the serialized execution settings.

(Inherited from PromptExecutionSettings)
FunctionChoiceBehavior

Gets or sets the behavior defining the way functions are chosen by the LLM and how they are invoked by AI connectors.

(Inherited from PromptExecutionSettings)
IsFrozen

Gets a value that indicates whether the PromptExecutionSettings are currently modifiable.

(Inherited from PromptExecutionSettings)
MaxTokens

Default: null. The maximum number of tokens to generate in the completion.

ModelId

Model identifier. This identifies the AI model these settings are configured for, e.g., gpt-4, gpt-3.5-turbo.

(Inherited from PromptExecutionSettings)
RandomSeed

Default: null. The seed to use for random sampling. If set, different calls will generate deterministic results.

SafePrompt

Default: false. Whether to inject a safety prompt before all conversations.

ServiceId

Service identifier. This identifies the service these settings are configured for, e.g., azure_openai_eastus, openai, ollama, huggingface, etc.

(Inherited from PromptExecutionSettings)
Temperature

Default: 0.7. The sampling temperature to use, between 0.0 and 1.0. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

ToolCallBehavior

Gets or sets the behavior for how tool calls are handled.

TopP

Default: 1. Nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
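
Taken together, these properties shape the Mistral chat-completion request. A minimal sketch combining them, under the defaults listed above; FunctionChoiceBehavior.Auto() is the inherited, connector-agnostic way to enable automatic function calling.

C#
var settings = new MistralAIPromptExecutionSettings
{
    Temperature = 0.2,   // lower than the 0.7 default: more focused output
    TopP = 0.9,          // sample only from the top 90% probability mass
    MaxTokens = 512,     // cap the completion length
    RandomSeed = 42,     // repeated calls become deterministic
    SafePrompt = true,   // prepend Mistral's safety prompt
    // Let the model choose and invoke registered kernel functions.
    FunctionChoiceBehavior = FunctionChoiceBehavior.Auto(),
};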

Methods

Clone()

Creates a new PromptExecutionSettings object that is a copy of the current instance.

Freeze()

Makes the current PromptExecutionSettings unmodifiable and sets its IsFrozen property to true.

FromExecutionSettings(PromptExecutionSettings)

Creates a new settings object with the values from another settings object.

ThrowIfFrozen()

Throws an InvalidOperationException if the PromptExecutionSettings are frozen.

(Inherited from PromptExecutionSettings)
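
A short sketch of how these methods interact, based on the descriptions above: Freeze locks the instance, Clone produces a modifiable copy, and FromExecutionSettings converts a generic settings object into the Mistral-specific type.

C#
var settings = new MistralAIPromptExecutionSettings { Temperature = 0.5 };

settings.Freeze();                 // IsFrozen is now true
// settings.Temperature = 0.9;     // would throw InvalidOperationException (via ThrowIfFrozen)

var copy = (MistralAIPromptExecutionSettings)settings.Clone();
copy.Temperature = 0.9;            // the copy is a new, unfrozen instance

// Lift a generic settings object into the Mistral-specific type.
PromptExecutionSettings generic = new() { ModelId = "mistral-small-latest" };
MistralAIPromptExecutionSettings mistral =
    MistralAIPromptExecutionSettings.FromExecutionSettings(generic);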

Applies to