Use prebuilt Text Analytics in Fabric with REST API and SynapseML (preview)

Important

This feature is in preview.

Text Analytics is an Azure AI service that enables you to perform text mining and text analysis with natural language processing (NLP) features.

This tutorial demonstrates how to use Text Analytics in Fabric with the RESTful API to:

  • Detect sentiment labels at the sentence or document level.
  • Identify the language for a given text input.
  • Extract key phrases from a text input.
  • Identify different entities in text and categorize them into predefined classes or types.

Prerequisites

# Get workload endpoints and access token

from synapse.ml.mlflow import get_mlflow_env_config
import json

mlflow_env_configs = get_mlflow_env_config()
access_token = mlflow_env_configs.driver_aad_token
prebuilt_AI_base_host = mlflow_env_configs.workload_endpoint + "cognitive/textanalytics/"
print("Workload endpoint for AI service: \n" + prebuilt_AI_base_host)

service_url = prebuilt_AI_base_host + "language/:analyze-text?api-version=2022-05-01"

# Make a RESTful request to the AI service

post_headers = {
    "Content-Type" : "application/json",
    "Authorization" : "Bearer {}".format(access_token)
}

def printresponse(response):
    print(f"HTTP {response.status_code}")
    if response.status_code == 200:
        try:
            result = response.json()
            print(json.dumps(result, indent=2, ensure_ascii=False))
        except ValueError:
            print(f"parse error {response.content}")
    else:
        print(response.headers)
        print(f"error message: {response.content}")

Sentiment analysis

The sentiment analysis feature provides a way to detect sentiment labels (such as "negative", "neutral", and "positive") and confidence scores at the sentence and document level. This feature also returns confidence scores between 0 and 1 for each document and the sentences within it, for positive, neutral, and negative sentiment. See Sentiment Analysis and Opinion Mining language support for the list of enabled languages.

import requests
from pprint import pprint
import uuid

post_body = {
    "kind": "SentimentAnalysis",
    "parameters": {
        "modelVersion": "latest",
        "opinionMining": "True"
    },
    "analysisInput":{
        "documents":[
            {
                "id":"1",
                "language":"en",
                "text": "The food and service were unacceptable. The concierge was nice, however."
            }
        ]
    }
} 

post_headers["x-ms-workload-resource-moniker"] = str(uuid.uuid1())
response = requests.post(service_url, json=post_body, headers=post_headers)

# Output all information of the request process
printresponse(response)

Output

    HTTP 200
    {
      "kind": "SentimentAnalysisResults",
      "results": {
        "documents": [
          {
            "id": "1",
            "sentiment": "mixed",
            "confidenceScores": {
              "positive": 0.43,
              "neutral": 0.04,
              "negative": 0.53
            },
            "sentences": [
              {
                "sentiment": "negative",
                "confidenceScores": {
                  "positive": 0.0,
                  "neutral": 0.01,
                  "negative": 0.99
                },
                "offset": 0,
                "length": 40,
                "text": "The food and service were unacceptable. ",
                "targets": [
                  {
                    "sentiment": "negative",
                    "confidenceScores": {
                      "positive": 0.01,
                      "negative": 0.99
                    },
                    "offset": 4,
                    "length": 4,
                    "text": "food",
                    "relations": [
                      {
                        "relationType": "assessment",
                        "ref": "#/documents/0/sentences/0/assessments/0"
                      }
                    ]
                  },
                  {
                    "sentiment": "negative",
                    "confidenceScores": {
                      "positive": 0.01,
                      "negative": 0.99
                    },
                    "offset": 13,
                    "length": 7,
                    "text": "service",
                    "relations": [
                      {
                        "relationType": "assessment",
                        "ref": "#/documents/0/sentences/0/assessments/0"
                      }
                    ]
                  }
                ],
                "assessments": [
                  {
                    "sentiment": "negative",
                    "confidenceScores": {
                      "positive": 0.01,
                      "negative": 0.99
                    },
                    "offset": 26,
                    "length": 12,
                    "text": "unacceptable",
                    "isNegated": false
                  }
                ]
              },
              {
                "sentiment": "positive",
                "confidenceScores": {
                  "positive": 0.86,
                  "neutral": 0.08,
                  "negative": 0.07
                },
                "offset": 40,
                "length": 32,
                "text": "The concierge was nice, however.",
                "targets": [
                  {
                    "sentiment": "positive",
                    "confidenceScores": {
                      "positive": 1.0,
                      "negative": 0.0
                    },
                    "offset": 44,
                    "length": 9,
                    "text": "concierge",
                    "relations": [
                      {
                        "relationType": "assessment",
                        "ref": "#/documents/0/sentences/1/assessments/0"
                      }
                    ]
                  }
                ],
                "assessments": [
                  {
                    "sentiment": "positive",
                    "confidenceScores": {
                      "positive": 1.0,
                      "negative": 0.0
                    },
                    "offset": 58,
                    "length": 4,
                    "text": "nice",
                    "isNegated": false
                  }
                ]
              }
            ],
            "warnings": []
          }
        ],
        "errors": [],
        "modelVersion": "2022-11-01"
      }
    }
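Rather than only printing the raw payload, you can read the labels out of the parsed JSON. The snippet below is a minimal sketch that walks the response shape shown in the sample output above (document- and sentence-level sentiment):

# Extract document- and sentence-level sentiment from the parsed response;
# the field names follow the sample output above.
result = response.json()
for doc in result["results"]["documents"]:
    print(f'Document {doc["id"]}: {doc["sentiment"]} {doc["confidenceScores"]}')
    for sentence in doc["sentences"]:
        print(f'  "{sentence["text"].strip()}" -> {sentence["sentiment"]}')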

Language detector

The language detector evaluates text input for each document and returns language identifiers with a score that indicates the strength of the analysis. This capability is useful for content stores that collect arbitrary text whose language is unknown. See the supported languages for language detection for the list of supported languages.

post_body = {
    "kind": "LanguageDetection",
    "parameters": {
        "modelVersion": "latest"
    },
    "analysisInput":{
        "documents":[
            {
                "id":"1",
                "text": "This is a document written in English."
            }
        ]
    }
}

post_headers["x-ms-workload-resource-moniker"] = str(uuid.uuid1())
response = requests.post(service_url, json=post_body, headers=post_headers)

# Output all information of the request process
printresponse(response)

Output

    HTTP 200
    {
      "kind": "LanguageDetectionResults",
      "results": {
        "documents": [
          {
            "id": "1",
            "detectedLanguage": {
              "name": "English",
              "iso6391Name": "en",
              "confidenceScore": 0.99
            },
            "warnings": []
          }
        ],
        "errors": [],
        "modelVersion": "2022-10-01"
      }
    }
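The detected language can be read directly from the parsed response. A minimal sketch based on the sample output above:

# Read the detected language and its confidence score from the parsed response
result = response.json()
for doc in result["results"]["documents"]:
    lang = doc["detectedLanguage"]
    print(f'Document {doc["id"]}: {lang["name"]} ({lang["iso6391Name"]}), score {lang["confidenceScore"]}')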

Key phrase extractor

Key phrase extraction evaluates unstructured text and returns a list of key phrases. This capability is useful if you need to quickly identify the main points in a collection of documents. See the supported languages for key phrase extraction for the list of supported languages.

post_body = {
    "kind": "KeyPhraseExtraction",
    "parameters": {
        "modelVersion": "latest"
    },
    "analysisInput":{
        "documents":[
            {
                "id":"1",
                "language":"en",
                "text": "Dr. Smith has a very modern medical office, and she has great staff."
            }
        ]
    }
}

post_headers["x-ms-workload-resource-moniker"] = str(uuid.uuid1())
response = requests.post(service_url, json=post_body, headers=post_headers)

# Output all information of the request process
printresponse(response)

Output

    HTTP 200
    {
      "kind": "KeyPhraseExtractionResults",
      "results": {
        "documents": [
          {
            "id": "1",
            "keyPhrases": [
              "modern medical office",
              "Dr. Smith",
              "great staff"
            ],
            "warnings": []
          }
        ],
        "errors": [],
        "modelVersion": "2022-10-01"
      }
    }
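To work with the phrases instead of the printed payload, you can collect them per document. A minimal sketch based on the sample output above:

# Collect the key phrases per document id from the parsed response
result = response.json()
key_phrases = {doc["id"]: doc["keyPhrases"] for doc in result["results"]["documents"]}
print(key_phrases)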

Named Entity Recognition (NER)

Named Entity Recognition (NER) can identify different entities in text and categorize them into predefined classes or types, such as person, location, event, product, and organization. See NER language support for the list of enabled languages.

post_body = {
    "kind": "EntityRecognition",
    "parameters": {
        "modelVersion": "latest"
    },
    "analysisInput":{
        "documents":[
            {
                "id":"1",
                "language": "en",
                "text": "I had a wonderful trip to Seattle last week."
            }
        ]
    }
}

post_headers["x-ms-workload-resource-moniker"] = str(uuid.uuid1())
response = requests.post(service_url, json=post_body, headers=post_headers)

# Output all information of the request process
printresponse(response)

Output

    HTTP 200
    {
      "kind": "EntityRecognitionResults",
      "results": {
        "documents": [
          {
            "id": "1",
            "entities": [
              {
                "text": "trip",
                "category": "Event",
                "offset": 18,
                "length": 4,
                "confidenceScore": 0.74
              },
              {
                "text": "Seattle",
                "category": "Location",
                "subcategory": "GPE",
                "offset": 26,
                "length": 7,
                "confidenceScore": 1.0
              },
              {
                "text": "last week",
                "category": "DateTime",
                "subcategory": "DateRange",
                "offset": 34,
                "length": 9,
                "confidenceScore": 0.8
              }
            ],
            "warnings": []
          }
        ],
        "errors": [],
        "modelVersion": "2021-06-01"
      }
    }
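The recognized entities can be iterated from the parsed response in the same way. A minimal sketch based on the sample output above; "subcategory" is only present for some entities, so it is checked before use:

# Iterate the recognized entities; "subcategory" is not returned for every entity
result = response.json()
for doc in result["results"]["documents"]:
    for entity in doc["entities"]:
        category = entity["category"]
        if "subcategory" in entity:
            category += f' ({entity["subcategory"]})'
        print(f'{entity["text"]}: {category}, score {entity["confidenceScore"]}')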

Entity linking

This section does not include REST API steps.
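The public Azure AI Language analyze-text API also exposes an "EntityLinking" kind alongside the kinds used above, so if the Fabric preview endpoint enables that kind, the same request pattern should carry over. The following is an untested sketch under that assumption, not a step from this tutorial:

# Hypothetical sketch: reuse the same analyze-text endpoint with the "EntityLinking" kind.
# Whether the Fabric preview endpoint enables this kind is an assumption.
post_body = {
    "kind": "EntityLinking",
    "parameters": {
        "modelVersion": "latest"
    },
    "analysisInput":{
        "documents":[
            {
                "id":"1",
                "language": "en",
                "text": "Microsoft was founded by Bill Gates and Paul Allen."
            }
        ]
    }
}

post_headers["x-ms-workload-resource-moniker"] = str(uuid.uuid1())
response = requests.post(service_url, json=post_body, headers=post_headers)

# Output all information of the request process
printresponse(response)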