In this tutorial, you learn how to detect liveness in faces, using a combination of server-side code and a client-side mobile application.
Tip
For general information about face liveness detection, see the conceptual guide.
This tutorial demonstrates how to operate a frontend application and an app server to perform liveness detection, including the optional step of face verification, across various platforms and languages.
Important
The Face client SDKs for liveness are a gated feature. You must request access to the liveness feature by filling out the Face Recognition intake form. When your Azure subscription is granted access, you can download the Face liveness SDK.
Tip
After you complete the prerequisites, you can get started faster by building and running a complete frontend sample (either on iOS, Android, or Web) from the SDK samples folder.
Your Azure account must have a Cognitive Services Contributor role assigned so that you can agree to the responsible AI terms and create a resource. To get this role assigned to your account, follow the steps in the Assign roles documentation, or contact your administrator.
Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
You need the key and endpoint from the resource you create to connect your application to the Face service.
You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
Access to the Azure AI Vision Face Client SDK for Mobile (iOS and Android) and Web. To get access to the SDK, you must apply for the Face Recognition Limited Access features. For more information, see the Face Limited Access page.
Familiarity with the Face liveness detection feature. See the conceptual guide.
Prepare SDKs
We provide SDKs in different languages to simplify development on frontend applications and app servers:
Download SDK for frontend application
Follow the instructions in the azure-ai-vision-sdk GitHub repository to integrate the UI and the code into your native mobile application. The liveness SDK supports Java/Kotlin for Android mobile applications, Swift for iOS mobile applications, and JavaScript for web applications:
For Swift iOS, follow the instructions in the iOS sample
For Kotlin/Java Android, follow the instructions in the Android sample
For JavaScript Web, follow the instructions in the Web sample
Once you've added the code into your application, the SDK handles starting the camera, guiding the end-user in adjusting their position, composing the liveness payload, and calling the Azure AI Face cloud service to process the liveness payload.
You can monitor the Releases section of the SDK repo for new SDK version updates.
Download Azure AI Face client library for app server
The app server/orchestrator is responsible for controlling the lifecycle of a liveness session. The app server must create a session before liveness detection is performed, and it can then query the result and delete the session when the liveness check finishes. We offer a library in various languages to simplify implementing your app server. Follow these steps to install the package you want:
For C#, follow the instructions in the dotnet readme
For Java, follow the instructions in the Java readme
For Python, follow the instructions in the Python readme
To create environment variables for your Azure Face service key and endpoint, see the quickstart.
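For example, on Linux or macOS you can set the variables in your current shell before running the app server; the values shown here are placeholders for your own resource name and key:

```shell
# Placeholder values: replace with your Face resource's endpoint and key.
export FACE_ENDPOINT="https://your-face-resource-name.cognitiveservices.azure.com/"
export FACE_APIKEY="your-face-resource-key"
```

On Windows, use `setx FACE_ENDPOINT ...` and `setx FACE_APIKEY ...` instead, then open a new console window.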
Perform liveness detection
The high-level steps involved in liveness orchestration are as follows:
The frontend application starts the liveness check and notifies the app server.
The app server creates a new liveness session with the Azure AI Face service. The service creates a liveness session and responds with a session authorization token. For more information about each request parameter involved in creating a liveness session, see the Liveness Create Session Operation.
var endpoint = new Uri(System.Environment.GetEnvironmentVariable("FACE_ENDPOINT"));
var credential = new AzureKeyCredential(System.Environment.GetEnvironmentVariable("FACE_APIKEY"));
var sessionClient = new FaceSessionClient(endpoint, credential);
var createContent = new CreateLivenessSessionContent(LivenessOperationMode.Passive)
{
DeviceCorrelationId = "723d6d03-ef33-40a8-9682-23a1feb7bccd",
EnableSessionImage = true,
};
var createResponse = await sessionClient.CreateLivenessSessionAsync(createContent);
var sessionId = createResponse.Value.SessionId;
Console.WriteLine("Session created.");
Console.WriteLine($"Session id: {sessionId}");
Console.WriteLine($"Auth token: {createResponse.Value.AuthToken}");
The SDK then starts the camera, guides the user into the correct position, and prepares the payload to call the liveness detection service endpoint.
The SDK calls the Azure AI Vision Face service to perform liveness detection. Once the service responds, the SDK notifies the frontend application that the liveness check is complete.
The frontend application relays the liveness check completion to the app server.
The app server can now query for the liveness detection result from the Azure AI Vision Face service.
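As a sketch of this step, assuming the same FaceSessionClient and sessionId from the session-creation example above, the app server can retrieve the result and then delete the session when it is no longer needed. The exact shape of the result object may vary by SDK version, so verify the property names against your installed package:

```csharp
var getResultResponse = await sessionClient.GetLivenessSessionResultAsync(sessionId);
var sessionResult = getResultResponse.Value;
Console.WriteLine($"Session id: {sessionResult.Id}");
Console.WriteLine($"Session status: {sessionResult.Status}");
Console.WriteLine($"Liveness detection decision: {sessionResult.Result?.Response.Body.LivenessDecision}");

// Clean up the session once the result has been consumed.
await sessionClient.DeleteLivenessSessionAsync(sessionId);
```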
Combining face verification with liveness detection enables biometric verification of a particular person of interest with an added guarantee that the person is physically present in the system.
There are two parts to integrating liveness with verification:
Step 1 - Select a reference image.
Step 2 - Set up the orchestration of liveness with verification.
The high-level steps involved in liveness with verification orchestration are as follows:
Providing the verification reference image by either of the following two methods:
The app server provides the reference image when creating the liveness session. For more information about each request parameter involved in creating a liveness session with verification, see the Liveness With Verify Create Session Operation.
var endpoint = new Uri(System.Environment.GetEnvironmentVariable("FACE_ENDPOINT"));
var credential = new AzureKeyCredential(System.Environment.GetEnvironmentVariable("FACE_APIKEY"));
var sessionClient = new FaceSessionClient(endpoint, credential);
var createContent = new CreateLivenessWithVerifySessionContent(LivenessOperationMode.Passive)
{
DeviceCorrelationId = "723d6d03-ef33-40a8-9682-23a1feb7bccd",
EnableSessionImage = true,
};
using var fileStream = new FileStream("test.png", FileMode.Open, FileAccess.Read);
var createResponse = await sessionClient.CreateLivenessWithVerifySessionAsync(createContent, fileStream);
var sessionId = createResponse.Value.SessionId;
Console.WriteLine("Session created.");
Console.WriteLine($"Session id: {sessionId}");
Console.WriteLine($"Auth token: {createResponse.Value.AuthToken}");
Console.WriteLine("The reference image:");
Console.WriteLine($" Face rectangle: {createResponse.Value.VerifyImage.FaceRectangle.Top}, {createResponse.Value.VerifyImage.FaceRectangle.Left}, {createResponse.Value.VerifyImage.FaceRectangle.Width}, {createResponse.Value.VerifyImage.FaceRectangle.Height}");
Console.WriteLine($" The quality for recognition: {createResponse.Value.VerifyImage.QualityForRecognition}");
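After the frontend completes the check, the app server can query the combined liveness and verification outcome. The following sketch assumes the same sessionClient and sessionId from the example above; the result-object property names may differ slightly across SDK versions, so check them against your installed package:

```csharp
var getResultResponse = await sessionClient.GetLivenessWithVerifySessionResultAsync(sessionId);
var sessionResult = getResultResponse.Value;
Console.WriteLine($"Session status: {sessionResult.Status}");
Console.WriteLine($"Liveness detection decision: {sessionResult.Result?.Response.Body.LivenessDecision}");
// The verification outcome: whether the live face matches the reference image, and with what confidence.
Console.WriteLine($"Verification result: {sessionResult.Result?.Response.Body.VerifyResult.IsIdentical}");
Console.WriteLine($"Verification confidence: {sessionResult.Result?.Response.Body.VerifyResult.MatchConfidence}");
```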
Perform other face operations after liveness detection
Optionally, you can perform further face operations after the liveness check, such as face analysis (for example, to get face attributes) and face identity operations.
To enable this, set the "enableSessionImage" parameter to "true" during the Session-Creation step.
After the session completes, you can extract the "sessionImageId" from the Session-Get-Result step.
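As a minimal sketch, assuming the session client exposes a GetSessionImageAsync method and you have extracted a sessionImageId from the session result (verify both against your SDK version), the app server can download the image captured during the session for use in later face operations:

```csharp
// sessionImageId is assumed to come from the Session-Get-Result step.
var getImageResponse = await sessionClient.GetSessionImageAsync(sessionImageId);

// Persist the captured frame locally for subsequent face analysis or identification.
using var outputFile = File.Create("session-image.jpg");
getImageResponse.Value.ToStream().CopyTo(outputFile);
```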