TextAnalyticsClient Class
The Language service API is a suite of natural language processing (NLP) skills built with best-in-class Microsoft machine learning algorithms. The API can be used to analyze unstructured text for tasks such as sentiment analysis, key phrase extraction, entity recognition, and language detection.
Further documentation can be found at https://docs.microsoft.com/azure/cognitive-services/language-service/overview
- Inheritance
- azure.ai.textanalytics._base_client.TextAnalyticsClientBase
- TextAnalyticsClient
Constructor
TextAnalyticsClient(endpoint: str, credential: AzureKeyCredential | TokenCredential, *, default_language: str | None = None, default_country_hint: str | None = None, api_version: str | TextAnalyticsApiVersion | None = None, **kwargs: Any)
Parameters
Name | Description |
---|---|
endpoint
Required
|
Supported Cognitive Services or Language resource endpoints (protocol and hostname, for example: 'https://<resource-name>.cognitiveservices.azure.com'). |
credential
Required
|
Credentials needed for the client to connect to Azure. This can be an instance of AzureKeyCredential if using a Cognitive Services/Language API key or a token credential from azure.identity. |
Keyword-Only Parameters
Name | Description |
---|---|
default_country_hint
|
Sets the default country_hint to use for all operations. Defaults to "US". If you don't want to use a country hint, pass the string "none". |
default_language
|
Sets the default language to use for all operations. Defaults to "en". |
api_version
|
The API version of the service to use for requests. It defaults to the latest service version. Setting to an older version may result in reduced feature compatibility. |
Examples
Creating the TextAnalyticsClient with endpoint and API key.
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient
endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]
text_analytics_client = TextAnalyticsClient(endpoint, AzureKeyCredential(key))
Creating the TextAnalyticsClient with endpoint and token credential from Azure Active Directory.
import os
from azure.ai.textanalytics import TextAnalyticsClient
from azure.identity import DefaultAzureCredential
endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
credential = DefaultAzureCredential()
text_analytics_client = TextAnalyticsClient(endpoint, credential=credential)
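The default_language and default_country_hint keyword arguments from the constructor signature can also be set at creation time so every subsequent call inherits them; a minimal sketch (the "es"/"MX" values are illustrative assumptions, not service defaults):
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient
endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]
# Every operation on this client will assume Spanish text and an "MX" country hint
# unless a per-call keyword or per-document value overrides it.
text_analytics_client = TextAnalyticsClient(
    endpoint,
    AzureKeyCredential(key),
    default_language="es",
    default_country_hint="MX",
)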
Methods
analyze_sentiment |
Analyze sentiment for a batch of documents. Turn on opinion mining with show_opinion_mining. Returns a sentiment prediction, as well as sentiment scores for each sentiment class (Positive, Negative, and Neutral) for the document and each sentence within it. See https://aka.ms/azsdk/textanalytics/data-limits for service data limits. New in version v3.1: The show_opinion_mining, disable_service_logs, and string_index_type keyword arguments. |
begin_abstract_summary |
Start a long-running abstractive summarization operation. For a conceptual discussion of abstractive summarization, see the service documentation: https://learn.microsoft.com/azure/cognitive-services/language-service/summarization/overview New in version 2023-04-01: The begin_abstract_summary client method. |
begin_analyze_actions |
Start a long-running operation to perform a variety of text analysis actions over a batch of documents. We recommend using this function if you want to analyze larger documents and/or combine multiple text analysis actions into one call. Otherwise, we recommend using the action-specific endpoints, for example analyze_sentiment. Note: see the service documentation for regional support of custom action features. New in version v3.1: The begin_analyze_actions client method. New in version 2022-05-01: The RecognizeCustomEntitiesAction, SingleLabelClassifyAction, MultiLabelClassifyAction, and AnalyzeHealthcareEntitiesAction input options and the corresponding RecognizeCustomEntitiesResult, ClassifyDocumentResult, and AnalyzeHealthcareEntitiesResult result objects. New in version 2023-04-01: The ExtractiveSummaryAction and AbstractiveSummaryAction input options and the corresponding ExtractiveSummaryResult and AbstractiveSummaryResult result objects. |
begin_analyze_healthcare_entities |
Analyze healthcare entities and identify relationships between these entities in a batch of documents. Entities are associated with references that can be found in existing knowledge bases, such as UMLS, CHV, MSH, etc. We also extract the relations found between entities, for example in "The subject took 100 mg of ibuprofen", we would extract the relationship between the "100 mg" dosage and the "ibuprofen" medication. New in version v3.1: The begin_analyze_healthcare_entities client method. New in version 2022-05-01: The display_name keyword argument. |
begin_extract_summary |
Start a long-running extractive summarization operation. For a conceptual discussion of extractive summarization, see the service documentation: https://learn.microsoft.com/azure/cognitive-services/language-service/summarization/overview New in version 2023-04-01: The begin_extract_summary client method. |
begin_multi_label_classify |
Start a long-running custom multi label classification operation. For information on regional support of custom features and how to train a model to classify your documents, see https://aka.ms/azsdk/textanalytics/customfunctionalities New in version 2022-05-01: The begin_multi_label_classify client method. |
begin_recognize_custom_entities |
Start a long-running custom named entity recognition operation. For information on regional support of custom features and how to train a model to recognize custom entities, see https://aka.ms/azsdk/textanalytics/customentityrecognition New in version 2022-05-01: The begin_recognize_custom_entities client method. |
begin_single_label_classify |
Start a long-running custom single label classification operation. For information on regional support of custom features and how to train a model to classify your documents, see https://aka.ms/azsdk/textanalytics/customfunctionalities New in version 2022-05-01: The begin_single_label_classify client method. |
close |
Close sockets opened by the client. Calling this method is unnecessary when using the client as a context manager. |
detect_language |
Detect language for a batch of documents. Returns the detected language and a numeric score between zero and one. Scores close to one indicate 100% certainty that the identified language is true. See https://aka.ms/talangs for the list of enabled languages. See https://aka.ms/azsdk/textanalytics/data-limits for service data limits. New in version v3.1: The disable_service_logs keyword argument. |
extract_key_phrases |
Extract key phrases from a batch of documents. Returns a list of strings denoting the key phrases in the input text. For example, for the input text "The food was delicious and there were wonderful staff", the API returns the main talking points: "food" and "wonderful staff". See https://aka.ms/azsdk/textanalytics/data-limits for service data limits. New in version v3.1: The disable_service_logs keyword argument. |
recognize_entities |
Recognize entities for a batch of documents. Identifies and categorizes entities in your text as people, places, organizations, date/time, quantities, percentages, currencies, and more. For the list of supported entity types, check https://aka.ms/taner. See https://aka.ms/azsdk/textanalytics/data-limits for service data limits. New in version v3.1: The disable_service_logs and string_index_type keyword arguments. |
recognize_linked_entities |
Recognize linked entities from a well-known knowledge base for a batch of documents. Identifies and disambiguates the identity of each entity found in text (for example, determining whether an occurrence of the word Mars refers to the planet, or to the Roman god of war). Recognized entities are associated with URLs to a well-known knowledge base, like Wikipedia. See https://aka.ms/azsdk/textanalytics/data-limits for service data limits. New in version v3.1: The disable_service_logs and string_index_type keyword arguments. |
recognize_pii_entities |
Recognize entities containing personal information for a batch of documents. Returns a list of personal information entities ("SSN", "Bank Account", etc.) in the document. For the list of supported entity types, check https://aka.ms/azsdk/language/pii. See https://aka.ms/azsdk/textanalytics/data-limits for service data limits. New in version v3.1: The recognize_pii_entities client method. |
analyze_sentiment
Analyze sentiment for a batch of documents. Turn on opinion mining with show_opinion_mining.
Returns a sentiment prediction, as well as sentiment scores for each sentiment class (Positive, Negative, and Neutral) for the document and each sentence within it.
See https://aka.ms/azsdk/textanalytics/data-limits for service data limits.
New in version v3.1: The show_opinion_mining, disable_service_logs, and string_index_type keyword arguments.
analyze_sentiment(documents: List[str] | List[TextDocumentInput] | List[Dict[str, str]], *, disable_service_logs: bool | None = None, language: str | None = None, model_version: str | None = None, show_opinion_mining: bool | None = None, show_stats: bool | None = None, string_index_type: str | None = None, **kwargs: Any) -> List[AnalyzeSentimentResult | DocumentError]
Parameters
Name | Description |
---|---|
documents
Required
|
The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}. |
Keyword-Only Parameters
Name | Description |
---|---|
show_opinion_mining
|
Whether to mine the opinions of a sentence and conduct more granular analysis around the aspects of a product or service (also known as aspect-based sentiment analysis). If set to true, the returned SentenceSentiment objects will have property mined_opinions containing the result of this analysis. Only available for API version v3.1 and up. |
language
|
The 2 letter ISO 639-1 representation of language for the entire batch. For example, use "en" for English; "es" for Spanish etc. If not set, uses "en" for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Language API. |
model_version
|
The model version to use for the analysis, e.g. "latest". If a model version is not specified, the API will default to the latest, non-preview version. See here for more info: https://aka.ms/text-analytics-model-versioning |
show_stats
|
If set to true, response will contain document level statistics in the statistics field of the document-level response. |
string_index_type
|
Specifies the method used to interpret string offsets. UnicodeCodePoint, the Python encoding, is the default. To override the Python default, you can also pass in Utf16CodeUnit or TextElement_v8. For additional information see https://aka.ms/text-analytics-offsets |
disable_service_logs
|
If set to true, you opt out of having your text input logged on the service side for troubleshooting. By default, the Language service logs your input text for 48 hours, solely to allow for troubleshooting issues in providing you with the service's natural language processing functions. Setting this parameter to true disables input logging and may limit our ability to remediate issues that occur. Please see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance for additional details, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. |
Returns
Type | Description |
---|---|
The combined list of AnalyzeSentimentResult and DocumentError in the order the original documents were passed in. |
Exceptions
Type | Description |
---|---|
Examples
Analyze sentiment in a batch of documents.
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient
endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]
text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
documents = [
"""I had the best day of my life. I decided to go sky-diving and it made me appreciate my whole life so much more.
I developed a deep-connection with my instructor as well, and I feel as if I've made a life-long friend in her.""",
"""This was a waste of my time. All of the views on this drop are extremely boring, all I saw was grass. 0/10 would
not recommend to any divers, even first timers.""",
"""This was pretty good! The sights were ok, and I had fun with my instructors! Can't complain too much about my experience""",
"""I only have one word for my experience: WOW!!! I can't believe I have had such a wonderful skydiving company right
in my backyard this whole time! I will definitely be a repeat customer, and I want to take my grandmother skydiving too,
I know she'll love it!"""
]
result = text_analytics_client.analyze_sentiment(documents, show_opinion_mining=True)
docs = [doc for doc in result if not doc.is_error]
print("Let's visualize the sentiment of each of these documents")
for idx, doc in enumerate(docs):
    print(f"Document text: {documents[idx]}")
    print(f"Overall sentiment: {doc.sentiment}")
begin_abstract_summary
Start a long-running abstractive summarization operation.
For a conceptual discussion of abstractive summarization, see the service documentation: https://learn.microsoft.com/azure/cognitive-services/language-service/summarization/overview
New in version 2023-04-01: The begin_abstract_summary client method.
begin_abstract_summary(documents: List[str] | List[TextDocumentInput] | List[Dict[str, str]], *, continuation_token: str | None = None, disable_service_logs: bool | None = None, display_name: str | None = None, language: str | None = None, polling_interval: int | None = None, show_stats: bool | None = None, model_version: str | None = None, string_index_type: str | None = None, sentence_count: int | None = None, **kwargs: Any) -> TextAnalysisLROPoller[ItemPaged[AbstractiveSummaryResult | DocumentError]]
Parameters
Name | Description |
---|---|
documents
Required
|
The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}. |
Keyword-Only Parameters
Name | Description |
---|---|
language
|
The 2 letter ISO 639-1 representation of language for the entire batch. For example, use "en" for English; "es" for Spanish etc. If not set, uses "en" for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Language API. |
show_stats
|
If set to true, response will contain document level statistics. |
sentence_count
|
Controls the approximate number of sentences in the output summaries. |
model_version
|
The model version to use for the analysis, e.g. "latest". If a model version is not specified, the API will default to the latest, non-preview version. See here for more info: https://aka.ms/text-analytics-model-versioning |
string_index_type
|
Specifies the method used to interpret string offsets. |
disable_service_logs
|
If set to true, you opt out of having your text input logged on the service side for troubleshooting. By default, the Language service logs your input text for 48 hours, solely to allow for troubleshooting issues in providing you with the service's natural language processing functions. Setting this parameter to true disables input logging and may limit our ability to remediate issues that occur. Please see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance for additional details, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. |
polling_interval
|
Waiting time between two polls for LRO operations if no Retry-After header is present. Defaults to 5 seconds. |
continuation_token
|
Call continuation_token() on the poller object to save the long-running operation (LRO) state into an opaque token. Pass the value as the continuation_token keyword argument to restart the LRO from a saved state. |
display_name
|
An optional display name to set for the requested analysis. |
Returns
Type | Description |
---|---|
An instance of a TextAnalysisLROPoller. Call result() on this object to return a heterogeneous pageable of AbstractiveSummaryResult and DocumentError. |
Exceptions
Type | Description |
---|---|
Examples
Perform abstractive summarization on a batch of documents.
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient
endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]
text_analytics_client = TextAnalyticsClient(
endpoint=endpoint,
credential=AzureKeyCredential(key),
)
document = [
"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, "
"human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI Cognitive "
"Services, I have been working with a team of amazing scientists and engineers to turn this quest into a "
"reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of "
"human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the "
"intersection of all three, there's magic-what we call XYZ-code as illustrated in Figure 1-a joint "
"representation to create more powerful AI that can speak, hear, see, and understand humans better. "
"We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, "
"spanning modalities and languages. The goal is to have pretrained models that can jointly learn "
"representations to support a broad range of downstream AI tasks, much in the way humans do today. "
"Over the past five years, we have achieved human performance on benchmarks in conversational speech "
"recognition, machine translation, conversational question answering, machine reading comprehension, "
"and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious "
"aspiration to produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
"is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational "
"component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."
]
poller = text_analytics_client.begin_abstract_summary(document)
abstract_summary_results = poller.result()
for result in abstract_summary_results:
    if result.kind == "AbstractiveSummarization":
        print("Summaries abstracted:")
        [print(f"{summary.text}\n") for summary in result.summaries]
    elif result.is_error is True:
        print("...Is an error with code '{}' and message '{}'".format(
            result.error.code, result.error.message
        ))
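The continuation_token keyword documented above can be used to save and resume this long-running operation; a hedged sketch of the pattern, reusing the client and document list from this example:
poller = text_analytics_client.begin_abstract_summary(document)
token = poller.continuation_token()  # opaque string capturing the LRO state; persist as needed
# Later (even in a different process), rebuild a poller from the saved token
# and wait for the same operation to complete.
resumed_poller = text_analytics_client.begin_abstract_summary(
    document, continuation_token=token
)
abstract_summary_results = resumed_poller.result()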
begin_analyze_actions
Start a long-running operation to perform a variety of text analysis actions over a batch of documents.
We recommend using this function if you want to analyze larger documents and/or combine multiple text analysis actions into one call. Otherwise, we recommend using the action-specific endpoints, for example analyze_sentiment.
Note
See the service documentation for regional support of custom action features.
New in version v3.1: The begin_analyze_actions client method.
New in version 2022-05-01: The RecognizeCustomEntitiesAction, SingleLabelClassifyAction, MultiLabelClassifyAction, and AnalyzeHealthcareEntitiesAction input options and the corresponding RecognizeCustomEntitiesResult, ClassifyDocumentResult, and AnalyzeHealthcareEntitiesResult result objects.
New in version 2023-04-01: The ExtractiveSummaryAction and AbstractiveSummaryAction input options and the corresponding ExtractiveSummaryResult and AbstractiveSummaryResult result objects.
begin_analyze_actions(documents: List[str] | List[TextDocumentInput] | List[Dict[str, str]], actions: List[RecognizeEntitiesAction | RecognizeLinkedEntitiesAction | RecognizePiiEntitiesAction | ExtractKeyPhrasesAction | AnalyzeSentimentAction | RecognizeCustomEntitiesAction | SingleLabelClassifyAction | MultiLabelClassifyAction | AnalyzeHealthcareEntitiesAction | ExtractiveSummaryAction | AbstractiveSummaryAction], *, continuation_token: str | None = None, display_name: str | None = None, language: str | None = None, polling_interval: int | None = None, show_stats: bool | None = None, **kwargs: Any) -> TextAnalysisLROPoller[ItemPaged[List[RecognizeEntitiesResult | RecognizeLinkedEntitiesResult | RecognizePiiEntitiesResult | ExtractKeyPhrasesResult | AnalyzeSentimentResult | RecognizeCustomEntitiesResult | ClassifyDocumentResult | AnalyzeHealthcareEntitiesResult | ExtractiveSummaryResult | AbstractiveSummaryResult | DocumentError]]]
Parameters
Name | Description |
---|---|
documents
Required
|
The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}. |
actions
Required
|
list[RecognizeEntitiesAction or
RecognizePiiEntitiesAction or
ExtractKeyPhrasesAction or
RecognizeLinkedEntitiesAction or
AnalyzeSentimentAction or
RecognizeCustomEntitiesAction or
SingleLabelClassifyAction or
MultiLabelClassifyAction or
AnalyzeHealthcareEntitiesAction or
ExtractiveSummaryAction or
AbstractiveSummaryAction]
A heterogeneous list of actions to perform on the input documents. Each action object encapsulates the parameters used for the particular action type. The action results will be in the same order as the input actions. |
Keyword-Only Parameters
Name | Description |
---|---|
display_name
|
An optional display name to set for the requested analysis. |
language
|
The 2 letter ISO 639-1 representation of language for the entire batch. For example, use "en" for English; "es" for Spanish etc. If not set, uses "en" for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Language API. |
show_stats
|
If set to true, response will contain document level statistics. |
polling_interval
|
Waiting time between two polls for LRO operations if no Retry-After header is present. Defaults to 5 seconds. |
continuation_token
|
Call continuation_token() on the poller object to save the long-running operation (LRO) state into an opaque token. Pass the value as the continuation_token keyword argument to restart the LRO from a saved state. |
Returns
Type | Description |
---|---|
An instance of a TextAnalysisLROPoller. Call result() on the poller object to return a pageable heterogeneous list of lists. This list of lists is first ordered by the documents you input, then ordered by the actions you input. For example, if you have documents input ["Hello", "world"], and actions RecognizeEntitiesAction and AnalyzeSentimentAction, when iterating over the list of lists, you will first iterate over the action results for the "Hello" document, getting the RecognizeEntitiesResult of "Hello", then the AnalyzeSentimentResult of "Hello". Then, you will get the RecognizeEntitiesResult and AnalyzeSentimentResult of "world". |
Exceptions
Type | Description |
---|---|
Examples
Start a long-running operation to perform a variety of text analysis actions over a batch of documents.
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import (
TextAnalyticsClient,
RecognizeEntitiesAction,
RecognizeLinkedEntitiesAction,
RecognizePiiEntitiesAction,
ExtractKeyPhrasesAction,
AnalyzeSentimentAction,
)
endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]
text_analytics_client = TextAnalyticsClient(
endpoint=endpoint,
credential=AzureKeyCredential(key),
)
documents = [
'We went to Contoso Steakhouse located at midtown NYC last week for a dinner party, and we adore the spot! '
'They provide marvelous food and they have a great menu. The chief cook happens to be the owner (I think his name is John Doe) '
'and he is super nice, coming out of the kitchen and greeted us all.'
,
'We enjoyed very much dining in the place! '
'The Sirloin steak I ordered was tender and juicy, and the place was impeccably clean. You can even pre-order from their '
'online menu at www.contososteakhouse.com, call 312-555-0176 or send email to order@contososteakhouse.com! '
'The only complaint I have is the food didn\'t come fast enough. Overall I highly recommend it!'
]
poller = text_analytics_client.begin_analyze_actions(
documents,
display_name="Sample Text Analysis",
actions=[
RecognizeEntitiesAction(),
RecognizePiiEntitiesAction(),
ExtractKeyPhrasesAction(),
RecognizeLinkedEntitiesAction(),
AnalyzeSentimentAction(),
],
)
document_results = poller.result()
for doc, action_results in zip(documents, document_results):
    print(f"\nDocument text: {doc}")
    for result in action_results:
        if result.kind == "EntityRecognition":
            print("...Results of Recognize Entities Action:")
            for entity in result.entities:
                print(f"......Entity: {entity.text}")
                print(f".........Category: {entity.category}")
                print(f".........Confidence Score: {entity.confidence_score}")
                print(f".........Offset: {entity.offset}")
        elif result.kind == "PiiEntityRecognition":
            print("...Results of Recognize PII Entities action:")
            for pii_entity in result.entities:
                print(f"......Entity: {pii_entity.text}")
                print(f".........Category: {pii_entity.category}")
                print(f".........Confidence Score: {pii_entity.confidence_score}")
        elif result.kind == "KeyPhraseExtraction":
            print("...Results of Extract Key Phrases action:")
            print(f"......Key Phrases: {result.key_phrases}")
        elif result.kind == "EntityLinking":
            print("...Results of Recognize Linked Entities action:")
            for linked_entity in result.entities:
                print(f"......Entity name: {linked_entity.name}")
                print(f".........Data source: {linked_entity.data_source}")
                print(f".........Data source language: {linked_entity.language}")
                print(
                    f".........Data source entity ID: {linked_entity.data_source_entity_id}"
                )
                print(f".........Data source URL: {linked_entity.url}")
                print(".........Document matches:")
                for match in linked_entity.matches:
                    print(f"............Match text: {match.text}")
                    print(f"............Confidence Score: {match.confidence_score}")
                    print(f"............Offset: {match.offset}")
                    print(f"............Length: {match.length}")
        elif result.kind == "SentimentAnalysis":
            print("...Results of Analyze Sentiment action:")
            print(f"......Overall sentiment: {result.sentiment}")
            print(
                f"......Scores: positive={result.confidence_scores.positive}; \
                neutral={result.confidence_scores.neutral}; \
                negative={result.confidence_scores.negative} \n"
            )
        elif result.is_error is True:
            print(
                f"...Is an error with code '{result.error.code}' and message '{result.error.message}'"
            )
    print("------------------------------------------")
begin_analyze_healthcare_entities
Analyze healthcare entities and identify relationships between these entities in a batch of documents.
Entities are associated with references that can be found in existing knowledge bases, such as UMLS, CHV, MSH, etc.
We also extract the relations found between entities, for example in "The subject took 100 mg of ibuprofen", we would extract the relationship between the "100 mg" dosage and the "ibuprofen" medication.
New in version v3.1: The begin_analyze_healthcare_entities client method.
New in version 2022-05-01: The display_name keyword argument.
begin_analyze_healthcare_entities(documents: List[str] | List[TextDocumentInput] | List[Dict[str, str]], *, continuation_token: str | None = None, disable_service_logs: bool | None = None, display_name: str | None = None, language: str | None = None, model_version: str | None = None, polling_interval: int | None = None, show_stats: bool | None = None, string_index_type: str | None = None, **kwargs: Any) -> AnalyzeHealthcareEntitiesLROPoller[ItemPaged[AnalyzeHealthcareEntitiesResult | DocumentError]]
Parameters
Name | Description |
---|---|
documents
Required
|
The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}. |
Keyword-Only Parameters
Name | Description |
---|---|
model_version
|
The model version to use for the analysis, e.g. "latest". If a model version is not specified, the API will default to the latest, non-preview version. See here for more info: https://aka.ms/text-analytics-model-versioning |
show_stats
|
If set to true, response will contain document level statistics. |
language
|
The 2 letter ISO 639-1 representation of language for the entire batch. For example, use "en" for English; "es" for Spanish etc. If not set, uses "en" for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Language API. |
display_name
|
An optional display name to set for the requested analysis. |
string_index_type
|
Specifies the method used to interpret string offsets. UnicodeCodePoint, the Python encoding, is the default. To override the Python default, you can also pass in Utf16CodeUnit or TextElement_v8. For additional information see https://aka.ms/text-analytics-offsets |
polling_interval
|
Waiting time between two polls for LRO operations if no Retry-After header is present. Defaults to 5 seconds. |
continuation_token
|
Call continuation_token() on the poller object to save the long-running operation (LRO) state into an opaque token. Pass the value as the continuation_token keyword argument to restart the LRO from a saved state. |
disable_service_logs
|
Defaults to true, meaning that the Language service will not log your input text on the service side for troubleshooting. If set to false, the Language service logs your input text for 48 hours, solely to allow for troubleshooting issues in providing you with the service's natural language processing functions. Please see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance for additional details, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. |
Returns
Type | Description |
---|---|
An instance of an AnalyzeHealthcareEntitiesLROPoller. Call result() on this object to return a heterogeneous pageable of AnalyzeHealthcareEntitiesResult and DocumentError. |
Exceptions
Type | Description |
---|---|
Examples
Recognize healthcare entities in a batch of documents.
import os
import typing
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient, HealthcareEntityRelation
endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]
text_analytics_client = TextAnalyticsClient(
endpoint=endpoint,
credential=AzureKeyCredential(key),
)
documents = [
"""
Patient needs to take 100 mg of ibuprofen, and 3 mg of potassium. Also needs to take
10 mg of Zocor.
""",
"""
Patient needs to take 50 mg of ibuprofen, and 2 mg of Coumadin.
"""
]
poller = text_analytics_client.begin_analyze_healthcare_entities(documents)
result = poller.result()
docs = [doc for doc in result if not doc.is_error]
print("Let's first visualize the outputted healthcare result:")
for doc in docs:
    for entity in doc.entities:
        print(f"Entity: {entity.text}")
        print(f"...Normalized Text: {entity.normalized_text}")
        print(f"...Category: {entity.category}")
        print(f"...Subcategory: {entity.subcategory}")
        print(f"...Offset: {entity.offset}")
        print(f"...Confidence score: {entity.confidence_score}")
        if entity.data_sources is not None:
            print("...Data Sources:")
            for data_source in entity.data_sources:
                print(f"......Entity ID: {data_source.entity_id}")
                print(f"......Name: {data_source.name}")
        if entity.assertion is not None:
            print("...Assertion:")
            print(f"......Conditionality: {entity.assertion.conditionality}")
            print(f"......Certainty: {entity.assertion.certainty}")
            print(f"......Association: {entity.assertion.association}")
    for relation in doc.entity_relations:
        print(f"Relation of type: {relation.relation_type} has the following roles")
        for role in relation.roles:
            print(f"...Role '{role.name}' with entity '{role.entity.text}'")
    print("------------------------------------------")
print("Now, let's get all of medication dosage relations from the documents")
dosage_of_medication_relations = [
    entity_relation
    for doc in docs
    for entity_relation in doc.entity_relations
    if entity_relation.relation_type == HealthcareEntityRelation.DOSAGE_OF_MEDICATION
]
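A short follow-on sketch that prints the dosage relations collected above, using the same role and entity attributes shown earlier in this example:
for relation in dosage_of_medication_relations:
    print(f"Found '{relation.relation_type}' relation:")
    for role in relation.roles:
        print(f"...Role '{role.name}' with entity '{role.entity.text}'")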
begin_extract_summary
Start a long-running extractive summarization operation.
For a conceptual discussion of extractive summarization, see the service documentation: https://learn.microsoft.com/azure/cognitive-services/language-service/summarization/overview
New in version 2023-04-01: The begin_extract_summary client method.
begin_extract_summary(documents: List[str] | List[TextDocumentInput] | List[Dict[str, str]], *, continuation_token: str | None = None, disable_service_logs: bool | None = None, display_name: str | None = None, language: str | None = None, polling_interval: int | None = None, show_stats: bool | None = None, model_version: str | None = None, string_index_type: str | None = None, max_sentence_count: int | None = None, order_by: Literal['Rank', 'Offset'] | None = None, **kwargs: Any) -> TextAnalysisLROPoller[ItemPaged[ExtractiveSummaryResult | DocumentError]]
Parameters
Name | Description |
---|---|
documents
Required
|
The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}. |
Keyword-Only Parameters
Name | Description |
---|---|
language
|
The 2 letter ISO 639-1 representation of language for the entire batch. For example, use "en" for English; "es" for Spanish etc. If not set, uses "en" for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Language API. |
show_stats
|
If set to true, response will contain document level statistics. |
max_sentence_count
|
Maximum number of sentences to return. Defaults to 3. |
order_by
|
Possible values include: "Offset", "Rank". Default value: "Offset". |
model_version
|
The model version to use for the analysis, e.g. "latest". If a model version is not specified, the API will default to the latest, non-preview version. See here for more info: https://aka.ms/text-analytics-model-versioning |
string_index_type
|
Specifies the method used to interpret string offsets. |
disable_service_logs
|
If set to true, you opt out of having your text input logged on the service side for troubleshooting. By default, the Language service logs your input text for 48 hours, solely to allow for troubleshooting issues in providing you with the service's natural language processing functions. Setting this parameter to true disables input logging and may limit our ability to remediate issues that occur. Please see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance for additional details, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. |
polling_interval
|
Waiting time between two polls for LRO operations if no Retry-After header is present. Defaults to 5 seconds. |
continuation_token
|
Call continuation_token() on the poller object to save the long-running operation (LRO) state into an opaque token. Pass the value as the continuation_token keyword argument to restart the LRO from a saved state. |
display_name
|
An optional display name to set for the requested analysis. |
Returns
Type | Description |
---|---|
An instance of a TextAnalysisLROPoller. Call result() on this object to return a heterogeneous pageable of ExtractiveSummaryResult and DocumentError. |
Exceptions
Type | Description |
---|---|
Examples
Perform extractive summarization on a batch of documents.
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient
endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]
text_analytics_client = TextAnalyticsClient(
endpoint=endpoint,
credential=AzureKeyCredential(key),
)
document = [
"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic, "
"human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI Cognitive "
"Services, I have been working with a team of amazing scientists and engineers to turn this quest into a "
"reality. In my role, I enjoy a unique perspective in viewing the relationship among three attributes of "
"human cognition: monolingual text (X), audio or visual sensory signals, (Y) and multilingual (Z). At the "
"intersection of all three, there's magic-what we call XYZ-code as illustrated in Figure 1-a joint "
"representation to create more powerful AI that can speak, hear, see, and understand humans better. "
"We believe XYZ-code will enable us to fulfill our long-term vision: cross-domain transfer learning, "
"spanning modalities and languages. The goal is to have pretrained models that can jointly learn "
"representations to support a broad range of downstream AI tasks, much in the way humans do today. "
"Over the past five years, we have achieved human performance on benchmarks in conversational speech "
"recognition, machine translation, conversational question answering, machine reading comprehension, "
"and image captioning. These five breakthroughs provided us with strong signals toward our more ambitious "
"aspiration to produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
"is closer in line with how humans learn and understand. I believe the joint XYZ-code is a foundational "
"component of this aspiration, if grounded with external knowledge sources in the downstream AI tasks."
]
poller = text_analytics_client.begin_extract_summary(document)
extract_summary_results = poller.result()
for result in extract_summary_results:
    if result.kind == "ExtractiveSummarization":
        print("Summary extracted: \n{}".format(
            " ".join([sentence.text for sentence in result.sentences]))
        )
    elif result.is_error is True:
        print("...Is an error with code '{}' and message '{}'".format(
            result.error.code, result.error.message
        ))
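The max_sentence_count and order_by keywords documented above tune the extractive output; a minimal sketch reusing the client and document from this example (rank_score is assumed from the ExtractiveSummarySentence model):
poller = text_analytics_client.begin_extract_summary(
    document,
    max_sentence_count=2,  # return at most two sentences per document
    order_by="Rank",       # highest-ranked sentences first instead of document order
)
for result in poller.result():
    if result.kind == "ExtractiveSummarization":
        for sentence in result.sentences:
            print(f"[rank {sentence.rank_score}] {sentence.text}")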
begin_multi_label_classify
Start a long-running custom multi label classification operation.
For information on regional support of custom features and how to train a model to classify your documents, see https://aka.ms/azsdk/textanalytics/customfunctionalities
New in version 2022-05-01: The begin_multi_label_classify client method.
begin_multi_label_classify(documents: List[str] | List[TextDocumentInput] | List[Dict[str, str]], project_name: str, deployment_name: str, *, continuation_token: str | None = None, disable_service_logs: bool | None = None, display_name: str | None = None, language: str | None = None, polling_interval: int | None = None, show_stats: bool | None = None, **kwargs: Any) -> TextAnalysisLROPoller[ItemPaged[ClassifyDocumentResult | DocumentError]]
Parameters
Name | Description |
---|---|
documents
Required
|
The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}. |
project_name
Required
|
Required. This field indicates the project name for the model. |
deployment_name
Required
|
This field indicates the deployment name for the model. |
Keyword-Only Parameters
Name | Description |
---|---|
language
|
The 2 letter ISO 639-1 representation of language for the entire batch. For example, use "en" for English; "es" for Spanish etc. If not set, uses "en" for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Language API. |
show_stats
|
If set to true, response will contain document level statistics. |
disable_service_logs
|
If set to true, you opt out of having your text input logged on the service side for troubleshooting. By default, the Language service logs your input text for 48 hours, solely to allow for troubleshooting issues in providing you with the service's natural language processing functions. Setting this parameter to true disables input logging and may limit our ability to remediate issues that occur. Please see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance for additional details, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. |
polling_interval
|
Waiting time between two polls for LRO operations if no Retry-After header is present. Defaults to 5 seconds. |
continuation_token
|
Call continuation_token() on the poller object to save the long-running operation (LRO) state into an opaque token. Pass the value as the continuation_token keyword argument to restart the LRO from a saved state. |
display_name
|
An optional display name to set for the requested analysis. |
Returns
Type | Description |
---|---|
An instance of a TextAnalysisLROPoller. Call result() on this object to return a heterogeneous pageable of ClassifyDocumentResult and DocumentError. |
Exceptions
Type | Description |
---|---|
Examples
Perform multi label classification on a batch of documents.
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient
endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]
project_name = os.environ["MULTI_LABEL_CLASSIFY_PROJECT_NAME"]
deployment_name = os.environ["MULTI_LABEL_CLASSIFY_DEPLOYMENT_NAME"]
path_to_sample_document = os.path.abspath(
os.path.join(
os.path.abspath(__file__),
"..",
"./text_samples/custom_classify_sample.txt",
)
)
text_analytics_client = TextAnalyticsClient(
endpoint=endpoint,
credential=AzureKeyCredential(key),
)
with open(path_to_sample_document) as fd:
    document = [fd.read()]
poller = text_analytics_client.begin_multi_label_classify(
document,
project_name=project_name,
deployment_name=deployment_name
)
document_results = poller.result()
for doc, classification_result in zip(document, document_results):
    if classification_result.kind == "CustomDocumentClassification":
        classifications = classification_result.classifications
        print(f"\nThe movie plot '{doc}' was classified as the following genres:\n")
        for classification in classifications:
            print("'{}' with confidence score {}.".format(
                classification.category, classification.confidence_score
            ))
    elif classification_result.is_error is True:
        print("Movie plot '{}' has an error with code '{}' and message '{}'".format(
            doc, classification_result.error.code, classification_result.error.message
        ))
begin_recognize_custom_entities
Start a long-running custom named entity recognition operation.
For information on regional support of custom features and how to train a model to recognize custom entities, see https://aka.ms/azsdk/textanalytics/customentityrecognition
New in version 2022-05-01: The begin_recognize_custom_entities client method.
begin_recognize_custom_entities(documents: List[str] | List[TextDocumentInput] | List[Dict[str, str]], project_name: str, deployment_name: str, *, continuation_token: str | None = None, disable_service_logs: bool | None = None, display_name: str | None = None, language: str | None = None, polling_interval: int | None = None, show_stats: bool | None = None, string_index_type: str | None = None, **kwargs: Any) -> TextAnalysisLROPoller[ItemPaged[RecognizeCustomEntitiesResult | DocumentError]]
Parameters
Name | Description |
---|---|
documents
Required
|
The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}. |
project_name
Required
|
Required. This field indicates the project name for the model. |
deployment_name
Required
|
This field indicates the deployment name for the model. |
Keyword-Only Parameters
Name | Description |
---|---|
language
|
The 2 letter ISO 639-1 representation of language for the entire batch. For example, use "en" for English; "es" for Spanish etc. If not set, uses "en" for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Language API. |
show_stats
|
If set to true, response will contain document level statistics. |
disable_service_logs
|
If set to true, you opt out of having your text input logged on the service side for troubleshooting. By default, the Language service logs your input text for 48 hours, solely to allow for troubleshooting issues in providing you with the service's natural language processing functions. Setting this parameter to true disables input logging and may limit our ability to remediate issues that occur. Please see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance for additional details, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. |
string_index_type
|
Specifies the method used to interpret string offsets. UnicodeCodePoint, the Python encoding, is the default. To override the Python default, you can also pass in Utf16CodeUnit or TextElement_v8. For additional information see https://aka.ms/text-analytics-offsets |
polling_interval
|
Waiting time between two polls for LRO operations if no Retry-After header is present. Defaults to 5 seconds. |
continuation_token
|
Call continuation_token() on the poller object to save the long-running operation (LRO) state into an opaque token. Pass the value as the continuation_token keyword argument to restart the LRO from a saved state. |
display_name
|
An optional display name to set for the requested analysis. |
Returns
Type | Description |
---|---|
An instance of a TextAnalysisLROPoller. Call result() on this object to return a heterogeneous pageable of RecognizeCustomEntitiesResult and DocumentError. |
Exceptions
Type | Description |
---|---|
Examples
Recognize custom entities in a batch of documents.
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient
endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]
project_name = os.environ["CUSTOM_ENTITIES_PROJECT_NAME"]
deployment_name = os.environ["CUSTOM_ENTITIES_DEPLOYMENT_NAME"]
path_to_sample_document = os.path.abspath(
os.path.join(
os.path.abspath(__file__),
"..",
"./text_samples/custom_entities_sample.txt",
)
)
text_analytics_client = TextAnalyticsClient(
endpoint=endpoint,
credential=AzureKeyCredential(key),
)
with open(path_to_sample_document) as fd:
    document = [fd.read()]
poller = text_analytics_client.begin_recognize_custom_entities(
document,
project_name=project_name,
deployment_name=deployment_name
)
document_results = poller.result()
for custom_entities_result in document_results:
    if custom_entities_result.kind == "CustomEntityRecognition":
        for entity in custom_entities_result.entities:
            print(
                "Entity '{}' has category '{}' with confidence score of '{}'".format(
                    entity.text, entity.category, entity.confidence_score
                )
            )
    elif custom_entities_result.is_error is True:
        print("...Is an error with code '{}' and message '{}'".format(
            custom_entities_result.error.code, custom_entities_result.error.message
        ))
begin_single_label_classify
Start a long-running custom single label classification operation.
For information on regional support of custom features and how to train a model to classify your documents, see https://aka.ms/azsdk/textanalytics/customfunctionalities
New in version 2022-05-01: The begin_single_label_classify client method.
begin_single_label_classify(documents: List[str] | List[TextDocumentInput] | List[Dict[str, str]], project_name: str, deployment_name: str, *, continuation_token: str | None = None, disable_service_logs: bool | None = None, display_name: str | None = None, language: str | None = None, polling_interval: int | None = None, show_stats: bool | None = None, **kwargs: Any) -> TextAnalysisLROPoller[ItemPaged[ClassifyDocumentResult | DocumentError]]
Parameters
Name | Description |
---|---|
documents
Required
|
The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}. |
project_name
Required
|
Required. This field indicates the project name for the model. |
deployment_name
Required
|
This field indicates the deployment name for the model. |
Keyword-Only Parameters
Name | Description |
---|---|
language
|
The 2 letter ISO 639-1 representation of language for the entire batch. For example, use "en" for English; "es" for Spanish etc. If not set, uses "en" for English as default. Per-document language will take precedence over whole batch language. See https://aka.ms/talangs for supported languages in Language API. |
show_stats
|
If set to true, response will contain document level statistics. |
disable_service_logs
|
If set to true, you opt out of having your text input logged on the service side for troubleshooting. By default, the Language service logs your input text for 48 hours, solely to allow for troubleshooting issues in providing you with the service's natural language processing functions. Setting this parameter to true disables input logging and may limit our ability to remediate issues that occur. Please see Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance for additional details, and Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. |
polling_interval
|
Waiting time between two polls for LRO operations if no Retry-After header is present. Defaults to 5 seconds. |
continuation_token
|
Call continuation_token() on the poller object to save the long-running operation (LRO) state into an opaque token. Pass the value as the continuation_token keyword argument to restart the LRO from a saved state. |
display_name
|
An optional display name to set for the requested analysis. |
Returns
Type | Description |
---|---|
An instance of a TextAnalysisLROPoller. Call result() on this object to return a heterogeneous pageable of ClassifyDocumentResult and DocumentError. |
Exceptions
Type | Description |
---|---|
Examples
Perform single label classification on a batch of documents.
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient
endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]
project_name = os.environ["SINGLE_LABEL_CLASSIFY_PROJECT_NAME"]
deployment_name = os.environ["SINGLE_LABEL_CLASSIFY_DEPLOYMENT_NAME"]
path_to_sample_document = os.path.abspath(
os.path.join(
os.path.abspath(__file__),
"..",
"./text_samples/custom_classify_sample.txt",
)
)
text_analytics_client = TextAnalyticsClient(
endpoint=endpoint,
credential=AzureKeyCredential(key),
)
with open(path_to_sample_document) as fd:
    document = [fd.read()]
poller = text_analytics_client.begin_single_label_classify(
document,
project_name=project_name,
deployment_name=deployment_name
)
document_results = poller.result()
for doc, classification_result in zip(document, document_results):
    if classification_result.kind == "CustomDocumentClassification":
        classification = classification_result.classifications[0]
        print("The document text '{}' was classified as '{}' with confidence score {}.".format(
            doc, classification.category, classification.confidence_score)
        )
    elif classification_result.is_error is True:
        print("Document text '{}' has an error with code '{}' and message '{}'".format(
            doc, classification_result.error.code, classification_result.error.message
        ))
close
Close sockets opened by the client. Calling this method is unnecessary when using the client as a context manager.
close() -> None
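A minimal sketch of the context-manager usage mentioned above, which makes an explicit close() call unnecessary:
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient
endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]
# Sockets are closed automatically when the with-block exits.
with TextAnalyticsClient(endpoint, AzureKeyCredential(key)) as client:
    results = client.detect_language(["Hello world"])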
Exceptions
Type | Description |
---|---|
detect_language
Detect language for a batch of documents.
Returns the detected language and a numeric score between zero and one. Scores close to one indicate 100% certainty that the identified language is true. See https://aka.ms/talangs for the list of enabled languages.
See https://aka.ms/azsdk/textanalytics/data-limits for service data limits.
New in version v3.1: The disable_service_logs keyword argument.
detect_language(documents: List[str] | List[DetectLanguageInput] | List[Dict[str, str]], *, country_hint: str | None = None, disable_service_logs: bool | None = None, model_version: str | None = None, show_stats: bool | None = None, **kwargs: Any) -> List[DetectLanguageResult | DocumentError]
Parameters
Name | Description |
---|---|
documents
Required
|
The set of documents to process as part of this batch. If you wish to specify the ID and country_hint on a per-item basis you must use as input a list[DetectLanguageInput] or a list of dict representations of DetectLanguageInput, like {"id": "1", "country_hint": "us", "text": "hello world"}. |
Keyword-Only Parameters
Name | Description |
---|---|
country_hint
|
Country of origin hint for the entire batch. Accepts two letter country codes specified by ISO 3166-1 alpha-2. Per-document country hints will take precedence over whole batch hints. Defaults to "US". If you don't want to use a country hint, pass the string "none". |
model_version
|
The model version to use for the analysis, e.g. "latest". If a model version is not specified, the API will default to the latest, non-preview version. See here for more info: https://aka.ms/text-analytics-model-versioning |
show_stats
|
If set to true, the response will contain document-level statistics in the statistics field of each per-document response. |
disable_service_logs
|
If set to true, you opt out of having your text input logged on the service side for troubleshooting. By default, the Language service logs your input text for 48 hours, solely to allow for troubleshooting issues in providing you with the service's natural language processing functions. Setting this parameter to true disables input logging and may limit our ability to remediate issues that occur. Please see the Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance for additional details, and the Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. |
Returns
Type | Description |
---|---|
The combined list of DetectLanguageResult and DocumentError in the order the original documents were passed in. |
Exceptions
Type | Description |
---|---|
Examples
Detecting language in a batch of documents.
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient
endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]
text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
documents = [
    """
    The concierge Paulette was extremely helpful. Sadly when we arrived the elevator was broken, but with Paulette's help we barely noticed this inconvenience.
    She arranged for our baggage to be brought up to our room with no extra charge and gave us a free meal to refurbish all of the calories we lost from
    walking up the stairs :). Can't say enough good things about my experience!
    """,
    """
    最近由于工作压力太大,我们决定去富酒店度假。那儿的温泉实在太舒服了,我跟我丈夫都完全恢复了工作前的青春精神!加油!
    """
]
result = text_analytics_client.detect_language(documents)
reviewed_docs = [doc for doc in result if not doc.is_error]
print("Let's see what language each review is in!")
for idx, doc in enumerate(reviewed_docs):
    print("Review #{} is in '{}', which has ISO639-1 name '{}'\n".format(
        idx, doc.primary_language.name, doc.primary_language.iso6391_name
    ))
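Per-document country hints can be supplied by passing dicts in the shape of DetectLanguageInput. A minimal sketch reusing the client above; the confidence_score attribute on the detected language is assumed from the v3.x object model.
documents = [
    {"id": "1", "country_hint": "US", "text": "I had the best day of my life."},
    {"id": "2", "country_hint": "MX", "text": "Este ha sido un dia fenomenal."},
]
result = text_analytics_client.detect_language(documents)
for doc in result:
    if not doc.is_error:
        # The per-document country_hint overrides any whole-batch hint
        print(f"Document {doc.id} is in '{doc.primary_language.name}' "
              f"(confidence: {doc.primary_language.confidence_score})")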
extract_key_phrases
Extract key phrases from a batch of documents.
Returns a list of strings denoting the key phrases in the input text. For example, for the input text "The food was delicious and there were wonderful staff", the API returns the main talking points: "food" and "wonderful staff".
See https://aka.ms/azsdk/textanalytics/data-limits for service data limits.
New in version v3.1: The disable_service_logs keyword argument.
extract_key_phrases(documents: List[str] | List[TextDocumentInput] | List[Dict[str, str]], *, disable_service_logs: bool | None = None, language: str | None = None, model_version: str | None = None, show_stats: bool | None = None, **kwargs: Any) -> List[ExtractKeyPhrasesResult | DocumentError]
Parameters
Name | Description |
---|---|
documents
Required
|
The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}. |
Keyword-Only Parameters
Name | Description |
---|---|
language
|
The 2-letter ISO 639-1 representation of language for the entire batch. For example, use "en" for English and "es" for Spanish. If not set, "en" (English) is used as the default. Per-document language will take precedence over the whole-batch language. See https://aka.ms/talangs for supported languages in the Language API. |
model_version
|
The model version to use for the analysis, e.g. "latest". If a model version is not specified, the API will default to the latest, non-preview version. See here for more info: https://aka.ms/text-analytics-model-versioning |
show_stats
|
If set to true, the response will contain document-level statistics in the statistics field of each per-document response. |
disable_service_logs
|
If set to true, you opt out of having your text input logged on the service side for troubleshooting. By default, the Language service logs your input text for 48 hours, solely to allow for troubleshooting issues in providing you with the service's natural language processing functions. Setting this parameter to true disables input logging and may limit our ability to remediate issues that occur. Please see the Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance for additional details, and the Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. |
Returns
Type | Description |
---|---|
The combined list of ExtractKeyPhrasesResult and DocumentError in the order the original documents were passed in. |
Exceptions
Type | Description |
---|---|
Examples
Extract the key phrases in a batch of documents.
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient
endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]
text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
articles = [
    """
    Washington, D.C. Autumn in DC is a uniquely beautiful season. The leaves fall from the trees
    in a city chock-full of forests, leaving yellow leaves on the ground and a clearer view of the
    blue sky above...
    """,
    """
    Redmond, WA. In the past few days, Microsoft has decided to further postpone the start date of
    its United States workers, due to the pandemic that rages with no end in sight...
    """,
    """
    Redmond, WA. Employees at Microsoft can be excited about the new coffee shop that will open on campus
    once workers no longer have to work remotely...
    """
]
result = text_analytics_client.extract_key_phrases(articles)
for idx, doc in enumerate(result):
    if not doc.is_error:
        print("Key phrases in article #{}: {}".format(
            idx + 1,
            ", ".join(doc.key_phrases)
        ))
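Per-document language can be set by passing TextDocumentInput objects instead of plain strings. A minimal sketch reusing the client above:
from azure.ai.textanalytics import TextDocumentInput
documents = [
    TextDocumentInput(id="1", text="The restaurant had great food and friendly staff.", language="en"),
    TextDocumentInput(id="2", text="La comida y el servicio fueron excelentes.", language="es"),
]
result = text_analytics_client.extract_key_phrases(documents)
for doc in result:
    if not doc.is_error:
        # The per-document language overrides any whole-batch language
        print(f"Key phrases in document {doc.id}: {', '.join(doc.key_phrases)}")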
recognize_entities
Recognize entities for a batch of documents.
Identifies and categorizes entities in your text as people, places, organizations, date/time, quantities, percentages, currencies, and more. For the list of supported entity types, check: https://aka.ms/taner
See https://aka.ms/azsdk/textanalytics/data-limits for service data limits.
New in version v3.1: The disable_service_logs and string_index_type keyword arguments.
recognize_entities(documents: List[str] | List[TextDocumentInput] | List[Dict[str, str]], *, disable_service_logs: bool | None = None, language: str | None = None, model_version: str | None = None, show_stats: bool | None = None, string_index_type: str | None = None, **kwargs: Any) -> List[RecognizeEntitiesResult | DocumentError]
Parameters
Name | Description |
---|---|
documents
Required
|
The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}. |
Keyword-Only Parameters
Name | Description |
---|---|
language
|
The 2-letter ISO 639-1 representation of language for the entire batch. For example, use "en" for English and "es" for Spanish. If not set, "en" (English) is used as the default. Per-document language will take precedence over the whole-batch language. See https://aka.ms/talangs for supported languages in the Language API. |
model_version
|
The model version to use for the analysis, e.g. "latest". If a model version is not specified, the API will default to the latest, non-preview version. See here for more info: https://aka.ms/text-analytics-model-versioning |
show_stats
|
If set to true, the response will contain document-level statistics in the statistics field of each per-document response. |
string_index_type
|
Specifies the method used to interpret string offsets. UnicodeCodePoint, the Python encoding, is the default. To override the Python default, you can also pass in Utf16CodeUnit or TextElement_v8. For additional information see https://aka.ms/text-analytics-offsets |
disable_service_logs
|
If set to true, you opt out of having your text input logged on the service side for troubleshooting. By default, the Language service logs your input text for 48 hours, solely to allow for troubleshooting issues in providing you with the service's natural language processing functions. Setting this parameter to true disables input logging and may limit our ability to remediate issues that occur. Please see the Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance for additional details, and the Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. |
Returns
Type | Description |
---|---|
The combined list of RecognizeEntitiesResult and DocumentError in the order the original documents were passed in. |
Exceptions
Type | Description |
---|---|
Examples
Recognize entities in a batch of documents.
import os
import typing
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient
endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]
text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
reviews = [
    """I work for Foo Company, and we hired Contoso for our annual founding ceremony. The food
    was amazing and we all can't say enough good words about the quality and the level of service.""",
    """We at the Foo Company re-hired Contoso after all of our past successes with the company.
    Though the food was still great, I feel there has been a quality drop since their last time
    catering for us. Is anyone else running into the same problem?""",
    """Bar Company is over the moon about the service we received from Contoso, the best sliders ever!!!!"""
]
result = text_analytics_client.recognize_entities(reviews)
result = [review for review in result if not review.is_error]
organization_to_reviews: typing.Dict[str, typing.List[str]] = {}
for idx, review in enumerate(result):
    for entity in review.entities:
        print(f"Entity '{entity.text}' has category '{entity.category}'")
        if entity.category == 'Organization':
            organization_to_reviews.setdefault(entity.text, [])
            organization_to_reviews[entity.text].append(reviews[idx])
for organization, org_reviews in organization_to_reviews.items():
    print(
        "\n\nOrganization '{}' has left us the following review(s): {}".format(
            organization, "\n\n".join(org_reviews)
        )
    )
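A minimal sketch of overriding the default offset encoding with string_index_type; it assumes each recognized entity exposes offset, length, and confidence_score (available in API version v3.1 and up).
documents = ["Microsoft was founded by Bill Gates and Paul Allen in Albuquerque."]
# Report offsets as UTF-16 code units instead of the Python default (UnicodeCodePoint)
result = text_analytics_client.recognize_entities(documents, string_index_type="Utf16CodeUnit")
for doc in result:
    if not doc.is_error:
        for entity in doc.entities:
            print(f"'{entity.text}' ({entity.category}) at offset {entity.offset}, "
                  f"length {entity.length}, confidence {entity.confidence_score}")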
recognize_linked_entities
Recognize linked entities from a well-known knowledge base for a batch of documents.
Identifies and disambiguates the identity of each entity found in text (for example, determining whether an occurrence of the word Mars refers to the planet, or to the Roman god of war). Recognized entities are associated with URLs to a well-known knowledge base, like Wikipedia.
See https://aka.ms/azsdk/textanalytics/data-limits for service data limits.
New in version v3.1: The disable_service_logs and string_index_type keyword arguments.
recognize_linked_entities(documents: List[str] | List[TextDocumentInput] | List[Dict[str, str]], *, disable_service_logs: bool | None = None, language: str | None = None, model_version: str | None = None, show_stats: bool | None = None, string_index_type: str | None = None, **kwargs: Any) -> List[RecognizeLinkedEntitiesResult | DocumentError]
Parameters
Name | Description |
---|---|
documents
Required
|
The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}. |
Keyword-Only Parameters
Name | Description |
---|---|
language
|
The 2-letter ISO 639-1 representation of language for the entire batch. For example, use "en" for English and "es" for Spanish. If not set, "en" (English) is used as the default. Per-document language will take precedence over the whole-batch language. See https://aka.ms/talangs for supported languages in the Language API. |
model_version
|
The model version to use for the analysis, e.g. "latest". If a model version is not specified, the API will default to the latest, non-preview version. See here for more info: https://aka.ms/text-analytics-model-versioning |
show_stats
|
If set to true, the response will contain document-level statistics in the statistics field of each per-document response. |
string_index_type
|
Specifies the method used to interpret string offsets. UnicodeCodePoint, the Python encoding, is the default. To override the Python default, you can also pass in Utf16CodeUnit or TextElement_v8. For additional information see https://aka.ms/text-analytics-offsets |
disable_service_logs
|
If set to true, you opt out of having your text input logged on the service side for troubleshooting. By default, the Language service logs your input text for 48 hours, solely to allow for troubleshooting issues in providing you with the service's natural language processing functions. Setting this parameter to true disables input logging and may limit our ability to remediate issues that occur. Please see the Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance for additional details, and the Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. |
Returns
Type | Description |
---|---|
The combined list of RecognizeLinkedEntitiesResult and DocumentError in the order the original documents were passed in. |
Exceptions
Type | Description |
---|---|
Examples
Recognize linked entities in a batch of documents.
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient
endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]
text_analytics_client = TextAnalyticsClient(endpoint=endpoint, credential=AzureKeyCredential(key))
documents = [
    """
    Microsoft was founded by Bill Gates with some friends he met at Harvard. One of his friends,
    Steve Ballmer, eventually became CEO after Bill Gates as well. Steve Ballmer eventually stepped
    down as CEO of Microsoft, and was succeeded by Satya Nadella.
    Microsoft originally moved its headquarters to Bellevue, Washington in January 1979, but is now
    headquartered in Redmond.
    """
]
result = text_analytics_client.recognize_linked_entities(documents)
docs = [doc for doc in result if not doc.is_error]
print(
    "Let's map each entity to its Wikipedia article. I also want to see how many times each "
    "entity is mentioned in a document\n\n"
)
entity_to_url = {}
for doc in docs:
    for entity in doc.entities:
        print("Entity '{}' has been mentioned '{}' time(s)".format(
            entity.name, len(entity.matches)
        ))
        if entity.data_source == "Wikipedia":
            entity_to_url[entity.name] = entity.url
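Continuing from the example above, a minimal sketch that drills into each match; it assumes LinkedEntity exposes data_source_entity_id and that each match carries offset and confidence_score (v3.1 and up).
for doc in docs:
    for entity in doc.entities:
        print(f"Entity '{entity.name}' links to {entity.url} "
              f"(data source id: {entity.data_source_entity_id})")
        for match in entity.matches:
            # Each match is one occurrence of the entity in the document text
            print(f"...matched text '{match.text}' at offset {match.offset} "
                  f"with confidence {match.confidence_score}")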
recognize_pii_entities
Recognize entities containing personal information for a batch of documents.
Returns a list of personal information entities ("SSN", "Bank Account", etc.) in the document. For the list of supported entity types, check https://aka.ms/azsdk/language/pii
See https://aka.ms/azsdk/textanalytics/data-limits for service data limits.
New in version v3.1: The recognize_pii_entities client method.
recognize_pii_entities(documents: List[str] | List[TextDocumentInput] | List[Dict[str, str]], *, categories_filter: List[str | PiiEntityCategory] | None = None, disable_service_logs: bool | None = None, domain_filter: str | PiiEntityDomain | None = None, language: str | None = None, model_version: str | None = None, show_stats: bool | None = None, string_index_type: str | None = None, **kwargs: Any) -> List[RecognizePiiEntitiesResult | DocumentError]
Parameters
Name | Description |
---|---|
documents
Required
|
The set of documents to process as part of this batch. If you wish to specify the ID and language on a per-item basis you must use as input a list[TextDocumentInput] or a list of dict representations of TextDocumentInput, like {"id": "1", "language": "en", "text": "hello world"}. |
Keyword-Only Parameters
Name | Description |
---|---|
language
|
The 2-letter ISO 639-1 representation of language for the entire batch. For example, use "en" for English and "es" for Spanish. If not set, "en" (English) is used as the default. Per-document language will take precedence over the whole-batch language. See https://aka.ms/talangs for supported languages in the Language API. |
model_version
|
The model version to use for the analysis, e.g. "latest". If a model version is not specified, the API will default to the latest, non-preview version. See here for more info: https://aka.ms/text-analytics-model-versioning |
show_stats
|
If set to true, the response will contain document-level statistics in the statistics field of each per-document response. |
domain_filter
|
Filters the response entities to only those included in the specified domain. For example, if set to 'phi', only entities in the Protected Health Information domain will be returned. See https://aka.ms/azsdk/language/pii for more information. |
categories_filter
|
Instead of filtering over all PII entity categories, you can pass in a list of the specific PII entity categories you want returned. For example, if you only want U.S. social security numbers returned for a document, you can pass in [PiiEntityCategory.US_SOCIAL_SECURITY_NUMBER] for this kwarg (see the sketch after the example below). |
string_index_type
|
Specifies the method used to interpret string offsets. UnicodeCodePoint, the Python encoding, is the default. To override the Python default, you can also pass in Utf16CodeUnit or TextElement_v8. For additional information see https://aka.ms/text-analytics-offsets |
disable_service_logs
|
Defaults to true, meaning that the Language service will not log your input text on the service side for troubleshooting. If set to false, the Language service logs your input text for 48 hours, solely to allow for troubleshooting issues in providing you with the service's natural language processing functions. Please see the Cognitive Services Compliance and Privacy notes at https://aka.ms/cs-compliance for additional details, and the Microsoft Responsible AI principles at https://www.microsoft.com/ai/responsible-ai. |
Returns
Type | Description |
---|---|
The combined list of RecognizePiiEntitiesResult and DocumentError in the order the original documents were passed in. |
Exceptions
Type | Description |
---|---|
Examples
Recognize personally identifiable information entities in a batch of documents.
import os
from azure.core.credentials import AzureKeyCredential
from azure.ai.textanalytics import TextAnalyticsClient
endpoint = os.environ["AZURE_LANGUAGE_ENDPOINT"]
key = os.environ["AZURE_LANGUAGE_KEY"]
text_analytics_client = TextAnalyticsClient(
    endpoint=endpoint, credential=AzureKeyCredential(key)
)
documents = [
    """Parker Doe has repaid all of their loans as of 2020-04-25.
    Their SSN is 859-98-0987. To contact them, use their phone number
    555-555-5555. They are originally from Brazil and have Brazilian CPF number 998.214.865-68"""
]
result = text_analytics_client.recognize_pii_entities(documents)
docs = [doc for doc in result if not doc.is_error]
print(
    "Let's compare the original document with the documents after redaction. "
    "I also want to comb through all of the entities that got redacted"
)
for idx, doc in enumerate(docs):
    print(f"Document text: {documents[idx]}")
    print(f"Redacted document text: {doc.redacted_text}")
    for entity in doc.entities:
        print("...Entity '{}' with category '{}' got redacted".format(
            entity.text, entity.category
        ))
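A minimal sketch of narrowing results with domain_filter and categories_filter; the PiiEntityDomain.PROTECTED_HEALTH_INFORMATION member name is assumed here as the enum equivalent of the 'phi' domain.
from azure.ai.textanalytics import PiiEntityCategory, PiiEntityDomain
documents = [
    "Parker Doe's SSN is 859-98-0987 and their phone number is 555-555-5555."
]
# Restrict results to the Protected Health Information domain...
phi_result = text_analytics_client.recognize_pii_entities(
    documents, domain_filter=PiiEntityDomain.PROTECTED_HEALTH_INFORMATION
)
# ...or return only specific categories.
ssn_result = text_analytics_client.recognize_pii_entities(
    documents, categories_filter=[PiiEntityCategory.US_SOCIAL_SECURITY_NUMBER]
)
for doc in ssn_result:
    if not doc.is_error:
        print(f"Redacted text: {doc.redacted_text}")
        for entity in doc.entities:
            print(f"...found '{entity.text}' with category '{entity.category}'")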