OpenAIChatCompletionService.GetStreamingTextContentsAsync Method

Definition

Get streaming results for the prompt using the specified execution settings. Each modality may support different types of streaming content.

C#
public System.Collections.Generic.IAsyncEnumerable<Microsoft.SemanticKernel.StreamingTextContent> GetStreamingTextContentsAsync (string prompt, Microsoft.SemanticKernel.PromptExecutionSettings? executionSettings = default, Microsoft.SemanticKernel.Kernel? kernel = default, System.Threading.CancellationToken cancellationToken = default);

F#
abstract member GetStreamingTextContentsAsync : string * Microsoft.SemanticKernel.PromptExecutionSettings * Microsoft.SemanticKernel.Kernel * System.Threading.CancellationToken -> System.Collections.Generic.IAsyncEnumerable<Microsoft.SemanticKernel.StreamingTextContent>
override this.GetStreamingTextContentsAsync : string * Microsoft.SemanticKernel.PromptExecutionSettings * Microsoft.SemanticKernel.Kernel * System.Threading.CancellationToken -> System.Collections.Generic.IAsyncEnumerable<Microsoft.SemanticKernel.StreamingTextContent>

Visual Basic
Public Function GetStreamingTextContentsAsync (prompt As String, Optional executionSettings As PromptExecutionSettings = Nothing, Optional kernel As Kernel = Nothing, Optional cancellationToken As CancellationToken = Nothing) As IAsyncEnumerable(Of StreamingTextContent)

Parameters

prompt
String

The prompt to complete.

executionSettings
PromptExecutionSettings

The AI execution settings (optional).

kernel
Kernel

The Kernel containing services, plugins, and other state for use throughout the operation.

cancellationToken
CancellationToken

The CancellationToken to monitor for cancellation requests. The default is None.

Returns

An asynchronous stream of StreamingTextContent updates generated by the remote model.
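
Examples

The following is a minimal usage sketch, not an official sample. The model ID, environment variable name, prompt text, and execution settings are assumptions chosen for illustration; the streamed text chunks are written to the console as they arrive.

using System;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;

// Assumed model ID and API key source; replace with your own values.
var service = new OpenAIChatCompletionService(
    modelId: "gpt-4o-mini",
    apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!);

// Optional execution settings for the request.
OpenAIPromptExecutionSettings settings = new() { MaxTokens = 256 };

// Consume the streaming updates as the remote model generates them.
await foreach (StreamingTextContent update in
    service.GetStreamingTextContentsAsync("Write a haiku about the sea.", settings))
{
    Console.Write(update.Text);
}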

Implements

GetStreamingTextContentsAsync(String, PromptExecutionSettings, Kernel, CancellationToken)

Applies to