AgentsClient.CreateRunStreamingAsync Method

Definition

Begins a new streaming ThreadRun that evaluates an AgentThread using a specified Agent.

C#

public virtual System.ClientModel.AsyncCollectionResult<Azure.AI.Projects.StreamingUpdate> CreateRunStreamingAsync (string threadId, string assistantId, string overrideModelName = default, string overrideInstructions = default, string additionalInstructions = default, System.Collections.Generic.IEnumerable<Azure.AI.Projects.ThreadMessage> additionalMessages = default, System.Collections.Generic.IEnumerable<Azure.AI.Projects.ToolDefinition> overrideTools = default, float? temperature = default, float? topP = default, int? maxPromptTokens = default, int? maxCompletionTokens = default, Azure.AI.Projects.TruncationObject truncationStrategy = default, BinaryData toolChoice = default, BinaryData responseFormat = default, System.Collections.Generic.IReadOnlyDictionary<string,string> metadata = default, System.Threading.CancellationToken cancellationToken = default);

F#

abstract member CreateRunStreamingAsync : string * string * string * string * string * seq<Azure.AI.Projects.ThreadMessage> * seq<Azure.AI.Projects.ToolDefinition> * Nullable<single> * Nullable<single> * Nullable<int> * Nullable<int> * Azure.AI.Projects.TruncationObject * BinaryData * BinaryData * System.Collections.Generic.IReadOnlyDictionary<string, string> * System.Threading.CancellationToken -> System.ClientModel.AsyncCollectionResult<Azure.AI.Projects.StreamingUpdate>
override this.CreateRunStreamingAsync : string * string * string * string * string * seq<Azure.AI.Projects.ThreadMessage> * seq<Azure.AI.Projects.ToolDefinition> * Nullable<single> * Nullable<single> * Nullable<int> * Nullable<int> * Azure.AI.Projects.TruncationObject * BinaryData * BinaryData * System.Collections.Generic.IReadOnlyDictionary<string, string> * System.Threading.CancellationToken -> System.ClientModel.AsyncCollectionResult<Azure.AI.Projects.StreamingUpdate>

Visual Basic

Public Overridable Function CreateRunStreamingAsync (threadId As String, assistantId As String, Optional overrideModelName As String = Nothing, Optional overrideInstructions As String = Nothing, Optional additionalInstructions As String = Nothing, Optional additionalMessages As IEnumerable(Of ThreadMessage) = Nothing, Optional overrideTools As IEnumerable(Of ToolDefinition) = Nothing, Optional temperature As Nullable(Of Single) = Nothing, Optional topP As Nullable(Of Single) = Nothing, Optional maxPromptTokens As Nullable(Of Integer) = Nothing, Optional maxCompletionTokens As Nullable(Of Integer) = Nothing, Optional truncationStrategy As TruncationObject = Nothing, Optional toolChoice As BinaryData = Nothing, Optional responseFormat As BinaryData = Nothing, Optional metadata As IReadOnlyDictionary(Of String, String) = Nothing, Optional cancellationToken As CancellationToken = Nothing) As AsyncCollectionResult(Of StreamingUpdate)
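
The following is a minimal consumption sketch, not a definitive implementation. It assumes an already constructed AgentsClient, placeholder "<thread-id>" and "<agent-id>" values, and that text deltas arrive as MessageContentUpdate instances with a Text property (an assumption based on the Azure.AI.Projects agent streaming samples).

using System;
using System.ClientModel;
using System.Threading.Tasks;
using Azure.AI.Projects;

async Task StreamRunAsync(AgentsClient client)
{
    // Start a streaming run; the call itself is not awaited because it returns
    // an AsyncCollectionResult<StreamingUpdate> rather than a Task.
    AsyncCollectionResult<StreamingUpdate> updates = client.CreateRunStreamingAsync(
        threadId: "<thread-id>",
        assistantId: "<agent-id>",
        temperature: 0.2f); // lower temperature for more focused output; topP left at its default

    // Enumerate updates as the service produces them.
    await foreach (StreamingUpdate update in updates)
    {
        // Assumption: message text deltas are surfaced as MessageContentUpdate.
        if (update is MessageContentUpdate contentUpdate)
        {
            Console.Write(contentUpdate.Text);
        }
    }
}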

Parameters

threadId
String

Identifier of the thread.

assistantId
String

The ID of the agent that should run the thread.

overrideModelName
String

The overridden model name that the agent should use to run the thread.

overrideInstructions
String

The overridden system instructions that the agent should use to run the thread.

additionalInstructions
String

Additional instructions to append at the end of the instructions for the run. This is useful for modifying the behavior on a per-run basis without overriding other instructions.

additionalMessages
IEnumerable<ThreadMessage>

Adds additional messages to the thread before creating the run.

overrideTools
IEnumerable<ToolDefinition>

The overridden list of enabled tools that the agent should use to run the thread.

temperature
Nullable<Single>

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

topP
Nullable<Single>

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

We generally recommend altering this or temperature but not both.

maxPromptTokens
Nullable<Int32>

The maximum number of prompt tokens that may be used over the course of the run. The run will make a best effort to use only the number of prompt tokens specified, across multiple turns of the run. If the run exceeds the number of prompt tokens specified, the run will end with status incomplete. See incomplete_details for more info.

maxCompletionTokens
Nullable<Int32>

The maximum number of completion tokens that may be used over the course of the run. The run will make a best effort to use only the number of completion tokens specified, across multiple turns of the run. If the run exceeds the number of completion tokens specified, the run will end with status incomplete. See incomplete_details for more info.

truncationStrategy
TruncationObject

The strategy to use for dropping messages as the context window moves forward.

toolChoice
BinaryData

Controls whether a tool is called by the model and, if so, which tool. A construction sketch for this parameter, responseFormat, and metadata follows the parameter list.

responseFormat
BinaryData

Specifies the format that the model must output.

metadata
IReadOnlyDictionary<String,String>

A set of up to 16 key/value pairs that can be attached to an object, used for storing additional information about that object in a structured format. Keys may be up to 64 characters in length and values may be up to 512 characters in length.

cancellationToken
CancellationToken

The cancellation token to use.
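
The sketch below shows one way to build the BinaryData and metadata arguments; it is a hedged illustration rather than service documentation. The JSON shapes passed for toolChoice ("auto") and responseFormat ({ "type": "json_object" }) are assumptions based on the underlying Agents REST contract, and the AgentsClient is assumed to be constructed elsewhere.

using System;
using System.Collections.Generic;
using Azure.AI.Projects;

void StartRunWithOptions(AgentsClient client)
{
    // Assumed JSON shape: let the model decide whether to call a tool.
    BinaryData toolChoice = BinaryData.FromString("\"auto\"");

    // Assumed JSON shape: request JSON-object output from the model.
    BinaryData responseFormat = BinaryData.FromObjectAsJson(new { type = "json_object" });

    // Up to 16 pairs; keys up to 64 characters, values up to 512 characters.
    var metadata = new Dictionary<string, string>
    {
        ["source"] = "docs-sample"
    };

    var updates = client.CreateRunStreamingAsync(
        "<thread-id>",
        "<agent-id>",
        toolChoice: toolChoice,
        responseFormat: responseFormat,
        metadata: metadata);

    // The returned AsyncCollectionResult<StreamingUpdate> can be enumerated as shown
    // in the sketch under Definition.
}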

Returns

AsyncCollectionResult<StreamingUpdate>

An asynchronous sequence of StreamingUpdate values produced as the run executes.

Exceptions

ArgumentNullException

threadId or assistantId is null.

ArgumentException

threadId is an empty string, and was expected to be non-empty.

Applies to