ContentSafetyModelFactory.TextAnalyzeSeverityResult Method
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Initializes a new instance of TextAnalyzeSeverityResult.
C#
public static Azure.AI.ContentSafety.TextAnalyzeSeverityResult TextAnalyzeSeverityResult (Azure.AI.ContentSafety.TextCategory category = default, int severity = 0);
F#
static member TextAnalyzeSeverityResult : Azure.AI.ContentSafety.TextCategory * int -> Azure.AI.ContentSafety.TextAnalyzeSeverityResult
Visual Basic
Public Shared Function TextAnalyzeSeverityResult (Optional category As TextCategory = Nothing, Optional severity As Integer = 0) As TextAnalyzeSeverityResult
Parameters
- category
- TextCategory
The text category.
- severity
- Int32
The severity of the input content; higher values indicate more severe content. Possible values are 0, 2, 4, and 6.
Returns
A new TextAnalyzeSeverityResult instance for mocking.
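Examples
The following sketch shows how the factory method might be used to construct a result for a unit test, assuming the prerelease Azure.AI.ContentSafety package and its TextCategory.Violence value; it is a minimal illustration rather than the library's documented sample.
using System;
using Azure.AI.ContentSafety;

// Build a mocked severity result without calling the Content Safety service,
// for example to stub out an AnalyzeText response in a unit test.
TextAnalyzeSeverityResult mocked = ContentSafetyModelFactory.TextAnalyzeSeverityResult(
    category: TextCategory.Violence, // assumed TextCategory value, chosen for illustration
    severity: 4);                    // one of the allowed values 0, 2, 4, 6

Console.WriteLine($"{mocked.Category}: {mocked.Severity}");
Because model factory methods bypass the service, the values passed in are surfaced directly on the returned instance's Category and Severity properties.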
Applies to
Azure SDK for .NET