RequestImageContentFilterResult Class
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
A content filter result for an image generation operation's input request content.
```csharp
public class RequestImageContentFilterResult : Azure.AI.OpenAI.ResponseImageContentFilterResult, System.ClientModel.Primitives.IJsonModel<Azure.AI.OpenAI.RequestImageContentFilterResult>, System.ClientModel.Primitives.IPersistableModel<Azure.AI.OpenAI.RequestImageContentFilterResult>
```

```fsharp
type RequestImageContentFilterResult = class
    inherit ResponseImageContentFilterResult
    interface IJsonModel<RequestImageContentFilterResult>
    interface IPersistableModel<RequestImageContentFilterResult>
```

```vb
Public Class RequestImageContentFilterResult
Inherits ResponseImageContentFilterResult
Implements IJsonModel(Of RequestImageContentFilterResult), IPersistableModel(Of RequestImageContentFilterResult)
```
- Inheritance: ResponseImageContentFilterResult → RequestImageContentFilterResult
- Implements: IJsonModel<RequestImageContentFilterResult>, IPersistableModel<RequestImageContentFilterResult>
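Because the type implements IJsonModel&lt;RequestImageContentFilterResult&gt; and IPersistableModel&lt;RequestImageContentFilterResult&gt;, an instance can be round-tripped through JSON with ModelReaderWriter from System.ClientModel.Primitives. The following is a minimal sketch, assuming a filter result instance has already been obtained elsewhere (for example, from an image generation response's prompt filter results); the helper class and method names are illustrative.

```csharp
using System;
using System.ClientModel.Primitives;
using Azure.AI.OpenAI;

static class ContentFilterSerialization
{
    // Round-trips a filter result through JSON using the IJsonModel/IPersistableModel
    // support declared on RequestImageContentFilterResult.
    public static RequestImageContentFilterResult RoundTrip(
        RequestImageContentFilterResult filterResult)
    {
        // Serialize the model to its JSON representation.
        BinaryData json = ModelReaderWriter.Write(filterResult);
        Console.WriteLine(json.ToString());

        // Deserialize back into a strongly typed instance.
        return ModelReaderWriter.Read<RequestImageContentFilterResult>(json);
    }
}
```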
Properties
| Property | Description |
| --- | --- |
| CustomBlocklists | A collection of binary filtering outcomes for configured custom blocklists. |
| Hate | A content filter category that can refer to any content that attacks or uses pejorative or discriminatory language with reference to a person or identity group based on certain differentiating attributes of these groups, including but not limited to race, ethnicity, nationality, gender identity and expression, sexual orientation, religion, immigration status, ability status, personal appearance, and body size. (Inherited from ResponseImageContentFilterResult) |
| Jailbreak | A detection result that describes user prompt injection attacks, where malicious users deliberately exploit system vulnerabilities to elicit unauthorized behavior from the LLM. This could lead to inappropriate content generation or violations of system-imposed restrictions. |
| Profanity | A detection result that identifies whether crude, vulgar, or otherwise objectionable language is present in the content. |
| SelfHarm | A content filter category that describes language related to physical actions intended to purposely hurt, injure, or damage one's body, or kill oneself. (Inherited from ResponseImageContentFilterResult) |
| Sexual | A content filter category for language related to anatomical organs and genitals, romantic relationships, acts portrayed in erotic or affectionate terms, pregnancy, physical sexual acts (including those portrayed as an assault or a forced sexual violent act against one's will), prostitution, pornography, and abuse. (Inherited from ResponseImageContentFilterResult) |
| Violence | A content filter category for language related to physical actions intended to hurt, injure, damage, or kill someone or something; describes weapons, guns, and related entities, such as manufacturers, associations, legislation, and so on. (Inherited from ResponseImageContentFilterResult) |
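These properties can be inspected after a request has been evaluated. The sketch below assumes the severity-based categories (Hate, SelfHarm, Sexual, Violence) expose Filtered and Severity members and that the detection-based ones (Jailbreak, Profanity) expose Detected, mirroring the related content filter result types in Azure.AI.OpenAI; confirm the exact member names against the current prerelease API.

```csharp
using System;
using Azure.AI.OpenAI;

static class RequestFilterInspector
{
    // Logs which input-prompt categories were flagged. Member names such as
    // Filtered, Severity, and Detected are assumptions based on the related
    // content filter result types; verify them against the prerelease API surface.
    public static void Report(RequestImageContentFilterResult result)
    {
        Console.WriteLine($"Hate: filtered={result.Hate?.Filtered}, severity={result.Hate?.Severity}");
        Console.WriteLine($"Sexual: filtered={result.Sexual?.Filtered}, severity={result.Sexual?.Severity}");
        Console.WriteLine($"Violence: filtered={result.Violence?.Filtered}, severity={result.Violence?.Severity}");
        Console.WriteLine($"SelfHarm: filtered={result.SelfHarm?.Filtered}, severity={result.SelfHarm?.Severity}");

        // Request-specific detections that are not part of the response-side result.
        Console.WriteLine($"Jailbreak detected: {result.Jailbreak?.Detected}");
        Console.WriteLine($"Profanity detected: {result.Profanity?.Detected}");
    }
}
```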