RequestContentFilterResult Class
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
A content filter result associated with a single input prompt item into a generative AI system.
public class RequestContentFilterResult : System.ClientModel.Primitives.IJsonModel<Azure.AI.OpenAI.RequestContentFilterResult>, System.ClientModel.Primitives.IPersistableModel<Azure.AI.OpenAI.RequestContentFilterResult>
type RequestContentFilterResult = class
interface IJsonModel<RequestContentFilterResult>
interface IPersistableModel<RequestContentFilterResult>
Public Class RequestContentFilterResult
Implements IJsonModel(Of RequestContentFilterResult), IPersistableModel(Of RequestContentFilterResult)
- Inheritance: Object → RequestContentFilterResult
- Implements: IJsonModel<RequestContentFilterResult>, IPersistableModel<RequestContentFilterResult>
Properties
| Property | Description |
| --- | --- |
| CustomBlocklists | A collection of binary filtering outcomes for configured custom blocklists. |
| Hate | A content filter category that can refer to any content that attacks or uses pejorative or discriminatory language with reference to a person or identity group based on certain differentiating attributes of these groups including but not limited to race, ethnicity, nationality, gender identity and expression, sexual orientation, religion, immigration status, ability status, personal appearance, and body size. |
| IndirectAttack | A detection result that describes attacks on systems powered by generative AI models that can occur whenever an application processes information that wasn't directly authored by either the developer of the application or the user. |
| Jailbreak | A detection result that describes user prompt injection attacks, where malicious users deliberately exploit system vulnerabilities to elicit unauthorized behavior from the LLM. This could lead to inappropriate content generation or violations of system-imposed restrictions. |
| Profanity | A detection result that identifies whether crude, vulgar, or otherwise objectionable language is present in the content. |
| SelfHarm | A content filter category that describes language related to physical actions intended to purposely hurt, injure, or damage one's body or kill oneself. |
| Sexual | A content filter category for language related to anatomical organs and genitals, romantic relationships, acts portrayed in erotic or affectionate terms, pregnancy, physical sexual acts, including those portrayed as an assault or a forced sexual violent act against one's will, prostitution, pornography, and abuse. |
| Violence | A content filter category for language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes weapons, guns and related entities, such as manufacturers, associations, legislation, and so on. |
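The properties above can be inspected after a request to decide why a prompt was rejected. The sketch below is illustrative, not a definitive implementation: it assumes the severity-based categories (Hate, SelfHarm, Sexual, Violence) expose `Filtered` and `Severity` members and the detection-based categories (Jailbreak, IndirectAttack, Profanity) expose `Filtered` and `Detected` members, consistent with the related content filter result types in Azure.AI.OpenAI; verify the exact property shapes against the SDK version you use.

```csharp
using System;
using Azure.AI.OpenAI;

static class PromptFilterInspector
{
    // Logs why a prompt was filtered, if at all. Property shapes
    // (Filtered/Severity/Detected) are assumptions; confirm against the SDK.
    public static void Inspect(RequestContentFilterResult filter)
    {
        if (filter is null)
        {
            return; // no content filter annotation was attached to this prompt
        }

        // Severity-based category: reports whether filtering occurred and at what level.
        if (filter.Hate?.Filtered == true)
        {
            Console.WriteLine($"Hate content filtered (severity: {filter.Hate.Severity}).");
        }

        // Detection-based category: reports a binary detection outcome.
        if (filter.Jailbreak?.Detected == true)
        {
            Console.WriteLine("Prompt flagged as a jailbreak attempt.");
        }
    }
}
```

A real application would typically obtain the `RequestContentFilterResult` from the service response annotations rather than construct one directly.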
Explicit Interface Implementations
| Method | Description |
| --- | --- |
| IJsonModel<RequestContentFilterResult>.Create(Utf8JsonReader, ModelReaderWriterOptions) | Reads one JSON value (including objects or arrays) from the provided reader and converts it to a model. |
| IJsonModel<RequestContentFilterResult>.Write(Utf8JsonWriter, ModelReaderWriterOptions) | Writes the model to the provided Utf8JsonWriter. |
| IPersistableModel<RequestContentFilterResult>.Create(BinaryData, ModelReaderWriterOptions) | Converts the provided BinaryData into a model. |
| IPersistableModel<RequestContentFilterResult>.GetFormatFromOptions(ModelReaderWriterOptions) | Gets the data interchange format (JSON, XML, etc.) that the model uses when communicating with the service. |
| IPersistableModel<RequestContentFilterResult>.Write(ModelReaderWriterOptions) | Writes the model into a BinaryData. |
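Because these are explicit interface implementations, callers normally go through the `ModelReaderWriter` helper in System.ClientModel rather than invoking them directly. A minimal round-trip sketch, assuming `filterResult` was obtained from a service response:

```csharp
using System;
using System.ClientModel.Primitives;
using Azure.AI.OpenAI;

static class FilterResultPersistence
{
    // Serializes a RequestContentFilterResult to JSON and reads it back.
    // ModelReaderWriter.Write/Read dispatch to the IJsonModel/IPersistableModel
    // implementations documented above.
    public static RequestContentFilterResult RoundTrip(RequestContentFilterResult filterResult)
    {
        BinaryData json = ModelReaderWriter.Write(filterResult);                 // serialize (JSON by default)
        return ModelReaderWriter.Read<RequestContentFilterResult>(json);         // deserialize
    }
}
```

This pattern is useful for caching or logging filter outcomes alongside the requests that produced them.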