Text Operations - Analyze Text

Analyze Text
A synchronous API for the analysis of potentially harmful text content. Currently, it supports four categories: Hate, SelfHarm, Sexual, and Violence.

```http
POST {endpoint}/contentsafety/text:analyze?api-version=2024-09-01
```

URI Parameters

| Name | In | Required | Type | Description |
| --- | --- | --- | --- | --- |
| endpoint | path | True | string | Supported Cognitive Services endpoints (protocol and hostname, for example: https://<resource-name>.cognitiveservices.azure.com). |
| api-version | query | True | string | The API version to use for this operation. |

Request Body

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| text | True | string | The text to be analyzed. We support a maximum of 10k Unicode characters (Unicode code points) in the text of one request. |
| blocklistNames | | string[] | The names of blocklists. |
| categories | | TextCategory[] | The categories to be analyzed. If not assigned, a default set of analysis results for the categories will be returned. |
| haltOnBlocklistHit | | boolean | When set to true, further analyses of harmful content will not be performed in cases where blocklists are hit. When set to false, all analyses of harmful content will be performed, whether or not blocklists are hit. |
| outputType | | AnalyzeTextOutputType | The type of text analysis output. If no value is assigned, the default value will be "FourSeverityLevels". |
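For illustration, the operation can be called with any HTTP client. Below is a minimal sketch using Python's requests package and API-key authentication; the endpoint, key, and blocklist name are placeholders, not values from this reference.

```python
import requests

# Placeholder values -- substitute your own resource endpoint and key.
ENDPOINT = "https://<resource-name>.cognitiveservices.azure.com"
API_KEY = "<your-api-key>"

url = f"{ENDPOINT}/contentsafety/text:analyze"
params = {"api-version": "2024-09-01"}
headers = {
    "Ocp-Apim-Subscription-Key": API_KEY,
    "Content-Type": "application/json",
}

# Only "text" is required; the optional fields below echo the documented
# defaults. "hypothetical-blocklist" is a made-up name for illustration --
# any blocklist you reference must already exist on the resource.
body = {
    "text": "This is text example",
    "blocklistNames": ["hypothetical-blocklist"],
    "haltOnBlocklistHit": False,
    "outputType": "FourSeverityLevels",
}

resp = requests.post(url, params=params, headers=headers, json=body)
resp.raise_for_status()
print(resp.json())
```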

Responses

| Name | Type | Description |
| --- | --- | --- |
| 200 OK | AnalyzeTextResult | The request has succeeded. |
| Other Status Codes | Azure.Core.Foundations.ErrorResponse | An unexpected error response. Headers: x-ms-error-code: string |
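On failure the service returns the Azure.Core.Foundations.ErrorResponse envelope together with the x-ms-error-code header. A small handling sketch, continuing the requests-based example above:

```python
# `resp` is the requests.Response from the sketch above.
if resp.status_code != 200:
    # The error code is surfaced as a response header...
    print("x-ms-error-code:", resp.headers.get("x-ms-error-code"))
    # ...and inside the JSON envelope: {"error": {"code": ..., "message": ...}}.
    error = resp.json().get("error", {})
    print(error.get("code"), "-", error.get("message"))
```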

Security

Ocp-Apim-Subscription-Key

Type: apiKey
In: header

OAuth2Auth

Type: oauth2
Flow: application
Token URL: https://login.microsoftonline.com/common/oauth2/v2.0/token

Scopes

| Name | Description |
| --- | --- |
| https://cognitiveservices.azure.com/.default | |
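Either scheme works: send the resource key in the Ocp-Apim-Subscription-Key header (as in the earlier sketch), or attach an OAuth2 bearer token for the scope above. A token-based sketch, assuming the azure-identity package:

```python
from azure.identity import DefaultAzureCredential

# Acquire a bearer token for the documented scope.
credential = DefaultAzureCredential()
token = credential.get_token("https://cognitiveservices.azure.com/.default")

# Use these headers in place of Ocp-Apim-Subscription-Key.
headers = {
    "Authorization": f"Bearer {token.token}",
    "Content-Type": "application/json",
}
```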

Examples

Analyze Text

Sample request

```http
POST {endpoint}/contentsafety/text:analyze?api-version=2024-09-01

{
  "text": "This is text example"
}
```

Sample response

```json
{
  "blocklistsMatch": [],
  "categoriesAnalysis": [
    {
      "category": "Hate",
      "severity": 0
    },
    {
      "category": "SelfHarm",
      "severity": 0
    },
    {
      "category": "Sexual",
      "severity": 0
    },
    {
      "category": "Violence",
      "severity": 0
    }
  ]
}
```
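The same request can also be issued through the azure-ai-contentsafety Python SDK instead of raw HTTP; a minimal sketch, assuming that package is installed (endpoint and key are placeholders):

```python
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key, as in the earlier sketches.
client = ContentSafetyClient(
    "https://<resource-name>.cognitiveservices.azure.com",
    AzureKeyCredential("<your-api-key>"),
)

result = client.analyze_text(AnalyzeTextOptions(text="This is text example"))
for item in result.categories_analysis:
    print(item.category, item.severity)
```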

Definitions

| Name | Description |
| --- | --- |
| AnalyzeTextOptions | The text analysis request. |
| AnalyzeTextOutputType | The type of text analysis output. If no value is assigned, the default value will be "FourSeverityLevels". |
| AnalyzeTextResult | The text analysis response. |
| Azure.Core.Foundations.Error | The error object. |
| Azure.Core.Foundations.ErrorResponse | A response containing error details. |
| Azure.Core.Foundations.InnerError | An object containing more specific information about the error. As per Microsoft One API guidelines - https://github.com/Microsoft/api-guidelines/blob/vNext/Guidelines.md#7102-error-condition-responses. |
| TextBlocklistMatch | The result of blocklist match. |
| TextCategoriesAnalysis | Text analysis result. |
| TextCategory | The harm category supported in Text content analysis. |

AnalyzeTextOptions

The text analysis request.

| Name | Type | Default value | Description |
| --- | --- | --- | --- |
| blocklistNames | string[] | | The names of blocklists. |
| categories | TextCategory[] | | The categories to be analyzed. If not assigned, a default set of analysis results for the categories will be returned. |
| haltOnBlocklistHit | boolean | | When set to true, further analyses of harmful content will not be performed in cases where blocklists are hit. When set to false, all analyses of harmful content will be performed, whether or not blocklists are hit. |
| outputType | AnalyzeTextOutputType | FourSeverityLevels | The type of text analysis output. If no value is assigned, the default value will be "FourSeverityLevels". |
| text | string | | The text to be analyzed. We support a maximum of 10k Unicode characters (Unicode code points) in the text of one request. |

AnalyzeTextOutputType

The type of text analysis output. If no value is assigned, the default value will be "FourSeverityLevels".

| Name | Type | Description |
| --- | --- | --- |
| EightSeverityLevels | string | Output severities in eight levels; the value can be 0, 1, 2, 3, 4, 5, 6, or 7. |
| FourSeverityLevels | string | Output severities in four levels; the value can be 0, 2, 4, or 6. |
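To request the finer-grained scale, set outputType in the request body; a fragment reusing the url, params, and headers from the earlier sketch:

```python
# Severities come back as 0-7 instead of the default 0/2/4/6.
body = {
    "text": "This is text example",
    "outputType": "EightSeverityLevels",
}
resp = requests.post(url, params=params, headers=headers, json=body)
```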

AnalyzeTextResult

The text analysis response.

| Name | Type | Description |
| --- | --- | --- |
| blocklistsMatch | TextBlocklistMatch[] | The blocklist match details. |
| categoriesAnalysis | TextCategoriesAnalysis[] | Analysis result for categories. |

Azure.Core.Foundations.Error

The error object.

| Name | Type | Description |
| --- | --- | --- |
| code | string | One of a server-defined set of error codes. |
| details | Azure.Core.Foundations.Error[] | An array of details about specific errors that led to this reported error. |
| innererror | Azure.Core.Foundations.InnerError | An object containing more specific information than the current object about the error. |
| message | string | A human-readable representation of the error. |
| target | string | The target of the error. |

Azure.Core.Foundations.ErrorResponse

A response containing error details.

| Name | Type | Description |
| --- | --- | --- |
| error | Azure.Core.Foundations.Error | The error object. |

Azure.Core.Foundations.InnerError

An object containing more specific information about the error. As per Microsoft One API guidelines - https://github.com/Microsoft/api-guidelines/blob/vNext/Guidelines.md#7102-error-condition-responses.

| Name | Type | Description |
| --- | --- | --- |
| code | string | One of a server-defined set of error codes. |
| innererror | Azure.Core.Foundations.InnerError | Inner error. |

TextBlocklistMatch

The result of blocklist match.

| Name | Type | Description |
| --- | --- | --- |
| blocklistItemId | string | The ID of the matched item. |
| blocklistItemText | string | The content of the matched item. |
| blocklistName | string | The name of the matched blocklist. |

TextCategoriesAnalysis

Text analysis result.

| Name | Type | Description |
| --- | --- | --- |
| category | TextCategory | The text analysis category. |
| severity | integer | The value increases with the severity of the input content. The value of this field is determined by the output type specified in the request: with 'FourSeverityLevels' the value can be 0, 2, 4, or 6; with 'EightSeverityLevels' it can be 0, 1, 2, 3, 4, 5, 6, or 7. |
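When consuming the response, a common pattern is to reject on any blocklist hit and otherwise threshold the per-category severities. A sketch (the threshold of 2 is an arbitrary illustration, not a service recommendation):

```python
result = resp.json()

# Treat any blocklist match as an immediate rejection.
if result.get("blocklistsMatch"):
    match = result["blocklistsMatch"][0]
    print("blocked by blocklist:", match["blocklistName"])

# Otherwise flag categories at or above an application-chosen threshold.
THRESHOLD = 2  # arbitrary example value
for item in result.get("categoriesAnalysis", []):
    if item["severity"] >= THRESHOLD:
        print(f"flagged: {item['category']} (severity {item['severity']})")
```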

TextCategory

The harm category supported in Text content analysis.

| Name | Type | Description |
| --- | --- | --- |
| Hate | string | The harm category for Text - Hate. |
| SelfHarm | string | The harm category for Text - SelfHarm. |
| Sexual | string | The harm category for Text - Sexual. |
| Violence | string | The harm category for Text - Violence. |