
Quickstart: Azure AI Vision v3.2 GA Read

OCR (Read) editions

Important

Select the Read edition that best fits your requirements.

  • Input: Images (general, in-the-wild images). Examples: labels, street signs, and posters. Read edition: OCR for images (version 4.0). Benefit: Optimized for general, non-document images with a performance-enhanced synchronous API that makes it easier to embed OCR in your user experience scenarios.
  • Input: Documents (digital and scanned, including images). Examples: books, articles, and reports. Read edition: Document Intelligence read model. Benefit: Optimized for text-heavy scanned and digital documents with an asynchronous API to help automate intelligent document processing at scale.

About Azure AI Vision v3.2 GA Read

Looking for the most recent Read OCR capabilities? All future Read OCR enhancements are part of the two services listed previously. There are no further updates to Azure AI Vision v3.2. For more information, see Call the Azure AI Vision 3.2 GA Read API and Quickstart: Azure AI Vision v3.2 GA Read.

Get started with the Azure AI Vision Read REST API or client libraries. The Read API provides you with AI algorithms for extracting text from images and returning it as structured strings. Follow these steps to install a package to your application and try out the sample code for basic tasks.

Use the optical character recognition (OCR) client library to read printed and handwritten text from an image. The OCR service can read visible text in an image and convert it to a character stream. For more information on text recognition, see the OCR overview. The code in this section uses the latest Azure AI Vision package.

Tip

You can also extract text from a local image. See the ComputerVisionClient methods, such as ReadInStreamAsync. Or, see the sample code on GitHub for scenarios involving local images.
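For illustration, here's a minimal sketch of the local-image path. It assumes the authenticated ComputerVisionClient named client and the polling logic shown later in this quickstart, plus a hypothetical local file named printed_text.jpg:

using (FileStream imageStream = File.OpenRead("printed_text.jpg"))
{
    // ReadInStreamAsync submits the local image; the response headers carry the operation location
    var localHeaders = await client.ReadInStreamAsync(imageStream);
    string localOperationLocation = localHeaders.OperationLocation;
    // Parse the operation ID from localOperationLocation and poll GetReadResultAsync, as in ReadFileUrl below
}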

Reference documentation | Library source code | Package (NuGet) | Samples

Prerequisites

  • An Azure subscription - Create one for free.
  • The Visual Studio IDE or current version of .NET Core.
  • An Azure AI Vision resource. You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
  • The key and endpoint from the resource you create to connect your application to the Azure AI Vision service.
    1. After your Azure AI Vision resource deploys, select Go to resource.
    2. In the left navigation menu, select Keys and Endpoint.
    3. Copy one of the keys and the Endpoint for use later in the quickstart.

Create environment variables

In this example, write your credentials to environment variables on the local machine that runs the application.

Go to the Azure portal. If the resource you created in the Prerequisites section deployed successfully, select Go to resource under Next Steps. You can find your key and endpoint under Resource Management in the Keys and Endpoint page. Your resource key isn't the same as your Azure subscription ID.

To set the environment variable for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.

  • To set the VISION_KEY environment variable, replace <your_key> with one of the keys for your resource.
  • To set the VISION_ENDPOINT environment variable, replace <your_endpoint> with the endpoint for your resource.

Important

If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.

For more information about AI services security, see Authenticate requests to Azure AI services.

setx VISION_KEY <your_key>
setx VISION_ENDPOINT <your_endpoint>
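
The setx commands apply to Windows. On Linux or macOS, a comparable sketch for the current shell session looks like the following (add the lines to your shell profile, such as ~/.bashrc or ~/.zshrc, to make them persistent):

export VISION_KEY=<your_key>
export VISION_ENDPOINT=<your_endpoint>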

After you add the environment variables, you may need to restart any running programs that will read the environment variables, including the console window.

Read printed and handwritten text

  1. Create a new C# application.

    Using Visual Studio, create a Console App (.NET Framework) project for C#, Windows, Console.

    After you create a new project, install the client library:

    1. Right-click on the project solution in the Solution Explorer and select Manage NuGet Packages for Solution.
    2. In the package manager that opens, select Browse. Select Include prerelease.
    3. Search for and select Microsoft.Azure.CognitiveServices.Vision.ComputerVision.
    4. In the details dialog box, select your project and select the latest stable version. Then select Install.
  2. From the project directory, open the Program.cs file in your preferred editor or IDE. Replace the contents of Program.cs with the following code.

    using System;
    using System.Collections.Generic;
    using Microsoft.Azure.CognitiveServices.Vision.ComputerVision;
    using Microsoft.Azure.CognitiveServices.Vision.ComputerVision.Models;
    using System.Threading.Tasks;
    using System.IO;
    using Newtonsoft.Json;
    using Newtonsoft.Json.Linq;
    using System.Threading;
    using System.Linq;
    
    namespace ComputerVisionQuickstart
    {
        class Program
        {
            // Add your Computer Vision key and endpoint
            static string key = Environment.GetEnvironmentVariable("VISION_KEY");
            static string endpoint = Environment.GetEnvironmentVariable("VISION_ENDPOINT");
    
            private const string READ_TEXT_URL_IMAGE = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/printed_text.jpg";
    
            static void Main(string[] args)
            {
                Console.WriteLine("Azure Cognitive Services Computer Vision - .NET quickstart example");
                Console.WriteLine();
    
                ComputerVisionClient client = Authenticate(endpoint, key);
    
                // Extract text (OCR) from a URL image using the Read API
                ReadFileUrl(client, READ_TEXT_URL_IMAGE).Wait();
            }
    
            public static ComputerVisionClient Authenticate(string endpoint, string key)
            {
                ComputerVisionClient client =
                  new ComputerVisionClient(new ApiKeyServiceClientCredentials(key))
                  { Endpoint = endpoint };
                return client;
            }
    
            public static async Task ReadFileUrl(ComputerVisionClient client, string urlFile)
            {
                Console.WriteLine("----------------------------------------------------------");
                Console.WriteLine("READ FILE FROM URL");
                Console.WriteLine();
    
                // Read text from URL
                var textHeaders = await client.ReadAsync(urlFile);
                // After the request, get the operation location (operation ID)
                string operationLocation = textHeaders.OperationLocation;
                Thread.Sleep(2000);
    
                // Retrieve the URI where the extracted text will be stored from the Operation-Location header.
                // We only need the ID and not the full URL
                const int numberOfCharsInOperationId = 36;
                string operationId = operationLocation.Substring(operationLocation.Length - numberOfCharsInOperationId);
    
                // Extract the text
                ReadOperationResult results;
                Console.WriteLine($"Extracting text from URL file {Path.GetFileName(urlFile)}...");
                Console.WriteLine();
                do
                {
                    results = await client.GetReadResultAsync(Guid.Parse(operationId));
                    // Wait between polls so the loop doesn't flood the service with requests
                    Thread.Sleep(1000);
                }
                while ((results.Status == OperationStatusCodes.Running ||
                    results.Status == OperationStatusCodes.NotStarted));
    
                // Display the found text.
                Console.WriteLine();
                var textUrlFileResults = results.AnalyzeResult.ReadResults;
                foreach (ReadResult page in textUrlFileResults)
                {
                    foreach (Line line in page.Lines)
                    {
                        Console.WriteLine(line.Text);
                    }
                }
                Console.WriteLine();
            }
    
        }
    }
    
  3. As an optional step, see Determine how to process the data. For example, to explicitly specify the latest GA model, edit the ReadAsync call as shown. Skip the parameter or use "latest" to use the most recent GA model.

      // Read text from URL with a specific model version
      var textHeaders = await client.ReadAsync(urlFile, null, null, "2022-04-30");
    
  4. Run the application.

    • From the Debug menu, select Start Debugging.

Output

Azure Cognitive Services Computer Vision - .NET quickstart example

----------------------------------------------------------
READ FILE FROM URL

Extracting text from URL file printed_text.jpg...


Nutrition Facts Amount Per Serving
Serving size: 1 bar (40g)
Serving Per Package: 4
Total Fat 13g
Saturated Fat 1.5g
Amount Per Serving
Trans Fat 0g
Calories 190
Cholesterol 0mg
ories from Fat 110
Sodium 20mg
nt Daily Values are based on Vitamin A 50%
calorie diet.

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to install the OCR client library and use the Read API. Next, learn more about the Read API features.

Use the optical character recognition (OCR) client library to read printed and handwritten text from a remote image. The OCR service can read visible text in an image and convert it to a character stream. For more information on text recognition, see the OCR overview.

Tip

You can also read text from a local image. See the ComputerVisionClientOperationsMixin methods, such as read_in_stream. Or, see the sample code on GitHub for scenarios involving local images.
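For illustration, here's a minimal sketch of the local-image path. It assumes the computervision_client created later in this quickstart and a hypothetical local file named printed_text.jpg:

# Submit a local image instead of a URL; raw=True exposes the Operation-Location header
with open("printed_text.jpg", "rb") as image_stream:
    read_response = computervision_client.read_in_stream(image_stream, raw=True)
operation_id = read_response.headers["Operation-Location"].split("/")[-1]
# Poll computervision_client.get_read_result(operation_id) just as in the main example below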

Reference documentation | Library source code | Package (PyPI) | Samples

Prerequisites

  • An Azure subscription - Create one for free.
  • Python 3.x.
  • Your Python installation should include pip. To check whether pip is installed, run pip --version on the command line. Get pip by installing the latest version of Python.
  • An Azure AI Vision resource. You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
  • The key and endpoint from the resource you create to connect your application to the Azure AI Vision service.
    1. After your Azure AI Vision resource deploys, select Go to resource.
    2. In the left navigation menu, select Keys and Endpoint.
    3. Copy one of the keys and the Endpoint for use later in the quickstart.

Create environment variables

In this example, write your credentials to environment variables on the local machine that runs the application.

Go to the Azure portal. If the resource you created in the Prerequisites section deployed successfully, select Go to resource under Next Steps. You can find your key and endpoint under Resource Management in the Keys and Endpoint page. Your resource key isn't the same as your Azure subscription ID.

To set the environment variable for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.

  • To set the VISION_KEY environment variable, replace <your_key> with one of the keys for your resource.
  • To set the VISION_ENDPOINT environment variable, replace <your_endpoint> with the endpoint for your resource.

Important

If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.

For more information about AI services security, see Authenticate requests to Azure AI services.

setx VISION_KEY <your_key>
setx VISION_ENDPOINT <your_endpoint>

After you add the environment variables, you may need to restart any running programs that will read the environment variables, including the console window.

Read printed and handwritten text

  1. Install the client library.

    In a console window, run the following command:

    pip install --upgrade azure-cognitiveservices-vision-computervision
    
  2. Install the Pillow library.

    pip install pillow
    
  3. Create a new Python application file, quickstart-file.py. Then open it in your preferred editor or IDE.

  4. Replace the contents of quickstart-file.py with the following code.

    from azure.cognitiveservices.vision.computervision import ComputerVisionClient
    from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes
    from azure.cognitiveservices.vision.computervision.models import VisualFeatureTypes
    from msrest.authentication import CognitiveServicesCredentials
    
    from array import array
    import os
    from PIL import Image
    import sys
    import time
    
    '''
    Authenticate
    Authenticates your credentials and creates a client.
    '''
    subscription_key = os.environ["VISION_KEY"]
    endpoint = os.environ["VISION_ENDPOINT"]
    
    computervision_client = ComputerVisionClient(endpoint, CognitiveServicesCredentials(subscription_key))
    '''
    END - Authenticate
    '''
    
    '''
    OCR: Read File using the Read API, extract text - remote
    This example will extract text in an image, then print results, line by line.
    This API call can also extract handwriting style text (not shown).
    '''
    print("===== Read File - remote =====")
    # Get an image with text
    read_image_url = "https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png"
    
    # Call API with URL and raw response (allows you to get the operation location)
    read_response = computervision_client.read(read_image_url,  raw=True)
    
    # Get the operation location (URL with an ID at the end) from the response
    read_operation_location = read_response.headers["Operation-Location"]
    # Grab the ID from the URL
    operation_id = read_operation_location.split("/")[-1]
    
    # Call the "GET" API and wait for it to retrieve the results 
    while True:
        read_result = computervision_client.get_read_result(operation_id)
        if read_result.status not in ['notStarted', 'running']:
            break
        time.sleep(1)
    
    # Print the detected text, line by line
    if read_result.status == OperationStatusCodes.succeeded:
        for text_result in read_result.analyze_result.read_results:
            for line in text_result.lines:
                print(line.text)
                print(line.bounding_box)
    print()
    '''
    END - Read File - remote
    '''
    
    print("End of Computer Vision quickstart.")
    
    
  5. As an optional step, see Determine how to process the data. For example, to explicitly specify the latest GA model, edit the read statement as shown. Skipping the parameter or using "latest" automatically uses the most recent GA model.

       # Call API with URL and raw response (allows you to get the operation location)
       read_response = computervision_client.read(read_image_url,  raw=True, model_version="2022-04-30")
    
  6. Run the application with the python command on your quickstart file.

    python quickstart-file.py
    

Output

===== Read File - remote =====
The quick brown fox jumps
[38.0, 650.0, 2572.0, 699.0, 2570.0, 854.0, 37.0, 815.0]
Over
[184.0, 1053.0, 508.0, 1044.0, 510.0, 1123.0, 184.0, 1128.0]
the lazy dog!
[639.0, 1011.0, 1976.0, 1026.0, 1974.0, 1158.0, 637.0, 1141.0]

End of Computer Vision quickstart.

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to install the OCR client library and use the Read API. Next, learn more about the Read API features.

Use the optical character recognition (OCR) client library to read printed and handwritten text with the Read API. The OCR service can read visible text in an image and convert it to a character stream. For more information on text recognition, see the OCR overview.

Tip

You can also read text from a local image. See the ComputerVisionClient methods, such as readInStream. Or, see the sample code on GitHub for scenarios involving local images.
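For illustration, here's a minimal sketch of the local-image path (run inside an async function). It assumes the computerVisionClient and the createReadStream import shown later in this quickstart, plus a hypothetical local file named printed_text.jpg:

// readInStream accepts a callback that returns a readable stream for the local image
const localResult = await computerVisionClient.readInStream(() => createReadStream('printed_text.jpg'));
// The operation ID is the last path segment of operationLocation; poll getReadResult as in readTextFromURL below
const localOperationId = localResult.operationLocation.split('/').slice(-1)[0];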

Reference documentation | Package (npm) | Samples

Prerequisites

  • An Azure subscription - Create one for free.
  • The current version of Node.js.
  • An Azure AI Vision resource. You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
  • The key and endpoint from the resource you create to connect your application to the Azure AI Vision service.
    1. After your Azure AI Vision resource deploys, select Go to resource.
    2. In the left navigation menu, select Keys and Endpoint.
    3. Copy one of the keys and the Endpoint for use later in the quickstart.

Create environment variables

In this example, write your credentials to environment variables on the local machine that runs the application.

Go to the Azure portal. If the resource you created in the Prerequisites section deployed successfully, select Go to resource under Next Steps. You can find your key and endpoint under Resource Management in the Keys and Endpoint page. Your resource key isn't the same as your Azure subscription ID.

To set the environment variable for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.

  • To set the VISION_KEY environment variable, replace <your_key> with one of the keys for your resource.
  • To set the VISION_ENDPOINT environment variable, replace <your_endpoint> with the endpoint for your resource.

Important

If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.

For more information about AI services security, see Authenticate requests to Azure AI services.

setx VISION_KEY <your_key>
setx VISION_ENDPOINT <your_endpoint>

After you add the environment variables, you may need to restart any running programs that will read the environment variables, including the console window.

Read printed and handwritten text

Create a new Node.js application.

  1. In a console window, create a new directory for your app, and navigate to it.

    mkdir myapp
    cd myapp
    
  2. Run the npm init command to create a node application with a package.json file. Select Enter for any prompts.

    npm init
    
  3. To install the client library, install the ms-rest-azure and @azure/cognitiveservices-computervision npm packages:

    npm install ms-rest-azure
    npm install @azure/cognitiveservices-computervision
    
  4. Install the async module:

    npm install async
    

    Your app's package.json file is updated with the dependencies.

  5. Create a new file, index.js, and open it in a text editor.

  6. Paste the following code into your index.js file.

    'use strict';
    
    const async = require('async');
    const fs = require('fs');
    const https = require('https');
    const path = require("path");
    const createReadStream = require('fs').createReadStream
    const sleep = require('util').promisify(setTimeout);
    const ComputerVisionClient = require('@azure/cognitiveservices-computervision').ComputerVisionClient;
    const ApiKeyCredentials = require('@azure/ms-rest-js').ApiKeyCredentials;
    /**
     * AUTHENTICATE
     * This single client is used for all examples.
     */
    const key = process.env.VISION_KEY;
    const endpoint = process.env.VISION_ENDPOINT;
    
    const computerVisionClient = new ComputerVisionClient(
      new ApiKeyCredentials({ inHeader: { 'Ocp-Apim-Subscription-Key': key } }), endpoint);
    /**
     * END - Authenticate
     */
    
    function computerVision() {
      async.series([
        async function () {
    
          /**
           * OCR: READ PRINTED & HANDWRITTEN TEXT WITH THE READ API
           * Extracts text from images using OCR (optical character recognition).
           */
          console.log('-------------------------------------------------');
          console.log('READ PRINTED, HANDWRITTEN TEXT AND PDF');
          console.log();
    
          // URL images containing printed and/or handwritten text. 
          // The URL can point to image files (.jpg/.png/.bmp) or multi-page files (.pdf, .tiff).
          const printedTextSampleURL = 'https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/ComputerVision/Images/printed_text.jpg';
    
          // Recognize text in printed image from a URL
          console.log('Read printed text from URL...', printedTextSampleURL.split('/').pop());
          const printedResult = await readTextFromURL(computerVisionClient, printedTextSampleURL);
          printRecText(printedResult);
    
          // Perform read and await the result from URL
          async function readTextFromURL(client, url) {
            // To recognize text in a local image, replace client.read() with client.readInStream() and pass the image as a stream instead of a URL
            let result = await client.read(url);
            // Operation ID is last path segment of operationLocation (a URL)
            let operation = result.operationLocation.split('/').slice(-1)[0];
    
            // Wait for read recognition to complete
            // result.status is initially undefined, since it's the result of read
            while (result.status !== "succeeded") { await sleep(1000); result = await client.getReadResult(operation); }
            return result.analyzeResult.readResults; // Return the read results; multi-page files such as .pdf or .tiff have one entry per page
          }
    
          // Prints all text from Read result
          function printRecText(readResults) {
            console.log('Recognized text:');
            for (const page in readResults) {
              if (readResults.length > 1) {
                console.log(`==== Page: ${page}`);
              }
              const result = readResults[page];
              if (result.lines.length) {
                for (const line of result.lines) {
                  console.log(line.words.map(w => w.text).join(' '));
                }
              }
              else { console.log('No recognized text.'); }
            }
          }
    
          /**
           * 
           * Download the specified file in the URL to the current local folder
           * 
           */
          function downloadFilesToLocal(url, localFileName) {
            return new Promise((resolve, reject) => {
              console.log('--- Downloading file to local directory from: ' + url);
              const request = https.request(url, (res) => {
                if (res.statusCode !== 200) {
                  console.log(`Download sample file failed. Status code: ${res.statusCode}, Message: ${res.statusMessage}`);
                  reject();
                }
                var data = [];
                res.on('data', (chunk) => {
                  data.push(chunk);
                });
                res.on('end', () => {
                  console.log('   ... Downloaded successfully');
                  fs.writeFileSync(localFileName, Buffer.concat(data));
                  resolve();
                });
              });
              request.on('error', function (e) {
                console.log(e.message);
                reject();
              });
              request.end();
            });
          }
    
          /**
           * END - Recognize Printed & Handwritten Text
           */
          console.log();
          console.log('-------------------------------------------------');
          console.log('End of quickstart.');
    
        },
        function () {
          return new Promise((resolve) => {
            resolve();
          })
        }
      ], (err) => {
        if (err) { throw err; }
      });
    }
    
    computerVision();
    
  7. As an optional step, see Determine how to process the data. For example, to explicitly specify the latest GA model, edit the read statement as shown. Skipping the parameter or using "latest" automatically uses the most recent GA model.

      let result = await client.read(url, { modelVersion: "2022-04-30" });
    
  8. Run the application with the node command on your quickstart file.

    node index.js
    

Output

-------------------------------------------------
READ PRINTED, HANDWRITTEN TEXT AND PDF

Read printed text from URL... printed_text.jpg
Recognized text:
Nutrition Facts Amount Per Serving
Serving size: 1 bar (40g)
Serving Per Package: 4
Total Fat 13g
Saturated Fat 1.5g
Amount Per Serving
Trans Fat 0g
Calories 190
Cholesterol 0mg
ories from Fat 110
Sodium 20mg
nt Daily Values are based on Vitamin A 50%
calorie diet.

-------------------------------------------------
End of quickstart.

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to install the OCR client library and use the Read API. Next, learn more about the Read API features.

Use the optical character recognition (OCR) REST API to read printed and handwritten text.

Note

This quickstart uses cURL commands to call the REST API. You can also call the REST API using a programming language. See the GitHub samples for examples in C#, Python, Java, and JavaScript.

Prerequisites

  • An Azure subscription - Create one for free.
  • cURL installed.
  • An Azure AI Vision resource. You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
  • The key and endpoint from the resource you create to connect your application to the Azure AI Vision service.
    1. After your Azure AI Vision resource deploys, select Go to resource.
    2. In the left navigation menu, select Keys and Endpoint.
    3. Copy one of the keys and the Endpoint for use later in the quickstart.

Read printed and handwritten text

The optical character recognition (OCR) service can extract visible text in an image or document and convert it to a character stream. For more information on text extraction, see the OCR overview.

Call the Read API

To create and run the sample, do the following steps:

  1. Copy the following command into a text editor.

  2. Make the following changes in the command where needed:

    1. Replace the value of <key> with your key.
    2. Replace the first part of the request URL (https://westcentralus.api.cognitive.microsoft.com/) with your own endpoint URL.

      Note

      New resources created after July 1, 2019, will use custom subdomain names. For more information and a complete list of regional endpoints, see Custom subdomain names for Azure AI services.

    3. Optionally, change the image URL in the request body (https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png) to the URL of a different image to be analyzed.
  3. Open a command prompt window.

  4. Paste the command from the text editor into the command prompt window, and then run the command.

curl -v -X POST "https://westcentralus.api.cognitive.microsoft.com/vision/v3.2/read/analyze" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <key>" --data-ascii "{'url':'https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png'}"

The response includes an Operation-Location header, whose value is a unique URL. You use this URL to query the results of the Read operation. The URL expires in 48 hours.
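For example, in the verbose (-v) output the header looks similar to the following, where the trailing segment is the operation ID that you use in the Get Read results procedure:

Operation-Location: https://westcentralus.api.cognitive.microsoft.com/vision/v3.2/read/analyzeResults/{operationId}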

Optionally, specify the model version

As an optional step, see Determine how to process the data. For example, to explicitly specify the latest GA model, use model-version=2022-04-30 as the parameter. Skipping the parameter or using model-version=latest automatically uses the most recent GA model.

curl -v -X POST "https://westcentralus.api.cognitive.microsoft.com/vision/v3.2/read/analyze?model-version=2022-04-30" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: <key>" --data-ascii "{'url':'https://learn.microsoft.com/azure/ai-services/computer-vision/media/quickstarts/presentation.png'}"

Get Read results

  1. Copy the following command into your text editor.

  2. Replace the URL with the Operation-Location value you copied in the previous procedure.

  3. Replace the value of <key> with your key.

  4. Open a console window.

  5. Paste the command from the text editor into the console window, and then run the command.

    curl -v -X GET "https://westcentralus.api.cognitive.microsoft.com/vision/v3.2/read/analyzeResults/{operationId}" -H "Ocp-Apim-Subscription-Key: <key>"
    

Examine the response

A successful response is returned in JSON. The sample application parses and displays a successful response in the console window, similar to the following example:

{
  "status": "succeeded",
  "createdDateTime": "2021-04-08T21:56:17.6819115+00:00",
  "lastUpdatedDateTime": "2021-04-08T21:56:18.4161316+00:00",
  "analyzeResult": {
    "version": "3.2",
    "readResults": [
      {
        "page": 1,
        "angle": 0,
        "width": 338,
        "height": 479,
        "unit": "pixel",
        "lines": [
          {
            "boundingBox": [
              25,
              14,
              318,
              14,
              318,
              59,
              25,
              59
            ],
            "text": "NOTHING",
            "appearance": {
              "style": {
                "name": "other",
                "confidence": 0.971
              }
            },
            "words": [
              {
                "boundingBox": [
                  27,
                  15,
                  294,
                  15,
                  294,
                  60,
                  27,
                  60
                ],
                "text": "NOTHING",
                "confidence": 0.994
              }
            ]
          }
        ]
      }
    ]
  }
}

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to call the Read REST API. Next, learn more about the Read API features.

Prerequisites

Read printed and handwritten text

  1. Under Optical character recognition, select Extract text from images.

  2. Under Try it out, acknowledge that this demo incurs usage to your Azure account. For more information, see Azure AI Vision pricing.

  3. Select an image from the available set, or upload your own.

  4. If necessary, select Please select a resource, and then choose your resource.

    After you select your image, the extracted text appears in the output window. You can also select the JSON tab to see the JSON output that the API call returns.

Below the try-it-out experience are next steps to start using this capability in your own application.

Next steps

In this quickstart, you used Vision Studio to access the Read API. Next, learn more about the Read API features.