HuggingFacePromptExecutionSettings Class

Definition

HuggingFace Execution Settings.

C#
public sealed class HuggingFacePromptExecutionSettings : Microsoft.SemanticKernel.PromptExecutionSettings

F#
type HuggingFacePromptExecutionSettings = class
    inherit PromptExecutionSettings

VB
Public NotInheritable Class HuggingFacePromptExecutionSettings
Inherits PromptExecutionSettings
Inheritance
Object → PromptExecutionSettings → HuggingFacePromptExecutionSettings
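
Examples

The following is a minimal usage sketch, not a definitive implementation. It assumes the Microsoft.SemanticKernel.Connectors.HuggingFace package and its AddHuggingFaceTextGeneration builder extension; the model id, environment variable name, parameter names, and prompt text are illustrative placeholders.

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.HuggingFace;

// Build a kernel backed by a Hugging Face text generation model.
// The model id and API key handling below are placeholders.
var kernel = Kernel.CreateBuilder()
    .AddHuggingFaceTextGeneration(
        model: "HuggingFaceH4/zephyr-7b-beta",
        apiKey: Environment.GetEnvironmentVariable("HF_API_KEY"))
    .Build();

// Configure HuggingFace-specific generation behavior.
var settings = new HuggingFacePromptExecutionSettings
{
    Temperature = 0.7f,   // softer sampling than the default
    TopP = 0.9f,          // nucleus sampling cutoff
    MaxNewTokens = 200,   // cap on generated tokens (excludes the input)
    WaitForModel = true   // wait instead of failing with 503 while the model loads
};

// Pass the settings along with the prompt via KernelArguments.
var result = await kernel.InvokePromptAsync(
    "Write a short product description for a solar lantern.",
    new KernelArguments(settings));

Console.WriteLine(result);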

Constructors

HuggingFacePromptExecutionSettings()

Properties

Details

Shows details of the generation, including usage.

DoSample

(Default: True). Bool. Whether or not to use sampling; if set to False, greedy decoding is used instead.

ExtensionData

Extra properties that may be included in the serialized execution settings.

(Inherited from PromptExecutionSettings)
FunctionChoiceBehavior

Gets or sets the behavior defining the way functions are chosen by LLM and how they are invoked by AI connectors.

(Inherited from PromptExecutionSettings)
IsFrozen

Gets a value that indicates whether the PromptExecutionSettings are currently modifiable.

(Inherited from PromptExecutionSettings)
LogProbs

Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token in the content of the message.

MaxNewTokens

Int (0-250). The number of new tokens to generate; this does not include the input length, it is an estimate of the size of the generated text you want. Each new token slows down the request, so look for a balance between response time and the length of the generated text.

MaxTime

(Default: None). Float (0-120.0). The maximum amount of time, in seconds, that the query should take. Network overhead can add to this, so it is a soft limit. Use it in combination with MaxNewTokens for best results.

MaxTokens

The maximum number of tokens to generate in the completion.

ModelId

Model identifier. This identifies the AI model these settings are configured for, e.g., gpt-4, gpt-3.5-turbo.

(Inherited from PromptExecutionSettings)
PresencePenalty

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.

RepetitionPenalty

(Default: None). Float (0.0-100.0). The more a token is used within the generation, the more it is penalized so that it is not picked in successive generation passes.

ResultsPerPrompt

(Default: 1). Integer. The number of propositions you want returned.

ReturnFullText

(Default: True). Bool. If set to False, the returned results will not contain the original query, making it easier for prompting.

Seed

The random seed to use so that repeated requests generate similar output.

ServiceId

Service identifier. This identifies the service these settings are configured for, e.g., azure_openai_eastus, openai, ollama, huggingface, etc.

(Inherited from PromptExecutionSettings)
Stop

Up to 4 sequences where the API will stop generating further tokens.

Temperature

(Default: 1.0). Float (0.0-100.0). The temperature of the sampling operation. 1 means regular sampling, 0 means always take the highest score, and 100.0 approaches uniform probability.

TopK

(Default: None). Integer defining the number of top tokens considered within the sample operation used to create new text.

TopLogProbs

An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. LogProbs must be set to true if this parameter is used.

TopP

(Default: None). Float defining the tokens that are within the sample operation of text generation. Tokens are added to the sample, from most probable to least probable, until the sum of their probabilities is greater than top_p.

UseCache

(Default: True). Boolean. There is a cache layer on the Inference API to speed up requests that have already been seen. Most models can use those results as is, because models are deterministic (meaning the results will be the same anyway). However, if you use a non-deterministic model, you can set this parameter to false to prevent the caching mechanism from being used, resulting in a genuinely new query.

WaitForModel

(Default: False). Boolean. If the model is not ready, wait for it instead of receiving a 503 error. This limits the number of requests required to get your inference done. It is advised to set this flag to true only after receiving a 503 error, as it will limit hanging in your application to known places.
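
As a rough illustration of how the sampling-related properties above might be combined, the following sketch requests sampled output directly from the chat completion service. It assumes the AddHuggingFaceChatCompletion builder extension from the Hugging Face connector; the model id, environment variable name, parameter names, and prompt are placeholders.

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;
using Microsoft.SemanticKernel.Connectors.HuggingFace;

// Placeholder model id and API key handling.
var kernel = Kernel.CreateBuilder()
    .AddHuggingFaceChatCompletion(
        model: "HuggingFaceH4/zephyr-7b-beta",
        apiKey: Environment.GetEnvironmentVariable("HF_API_KEY"))
    .Build();

var settings = new HuggingFacePromptExecutionSettings
{
    DoSample = true,          // sample instead of greedy decoding
    TopK = 50,                // restrict sampling to the 50 most likely tokens
    RepetitionPenalty = 1.1f, // discourage repeating tokens
    Seed = 42,                // ask for reproducible sampling
    UseCache = false          // bypass the Inference API cache layer
};

var chat = kernel.GetRequiredService<IChatCompletionService>();
var history = new ChatHistory();
history.AddUserMessage("Summarize the plot of Hamlet in two sentences.");

var reply = await chat.GetChatMessageContentAsync(history, settings, kernel);
Console.WriteLine(reply.Content);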

Methods

Clone()

Creates a new PromptExecutionSettings object that is a copy of the current instance.

Freeze()

Makes the current PromptExecutionSettings unmodifiable and sets its IsFrozen property to true.

(Inherited from PromptExecutionSettings)
FromExecutionSettings(PromptExecutionSettings)

Gets the specialization for the HuggingFace execution settings.

ThrowIfFrozen()

Throws an InvalidOperationException if the PromptExecutionSettings are frozen.

(Inherited from PromptExecutionSettings)
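
The sketch below illustrates FromExecutionSettings together with the inherited Freeze and Clone members. It is an assumption-laden example: the model id is a placeholder, and the ExtensionData key name ("temperature") is assumed to match the HuggingFace-specific property during conversion.

using System.Collections.Generic;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.HuggingFace;

// Convert a generic PromptExecutionSettings instance into the
// HuggingFace-specific specialization. ExtensionData entries are
// assumed to map onto matching HuggingFace properties where the
// key names line up.
PromptExecutionSettings generic = new()
{
    ModelId = "HuggingFaceH4/zephyr-7b-beta", // placeholder model id
    ExtensionData = new Dictionary<string, object> { ["temperature"] = 0.5f }
};

HuggingFacePromptExecutionSettings hfSettings =
    HuggingFacePromptExecutionSettings.FromExecutionSettings(generic);

// Freeze the settings so they can no longer be modified; IsFrozen becomes true.
hfSettings.Freeze();
// hfSettings.Temperature = 0.9f; // would throw InvalidOperationException

// Clone produces a modifiable copy of the frozen instance.
var editable = (HuggingFacePromptExecutionSettings)hfSettings.Clone();
editable.Temperature = 0.9f;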

Applies to