Select a domain for a Custom Vision project
This guide shows you how to select a domain for your project in the Custom Vision Service. A domain optimizes the model for a specific type of image and serves as the starting point for your project.
Sign in to your account on the Custom Vision website and select your project. Select the Settings icon at the top right. On the Project Settings page, you can choose a model domain; choose the domain that's closest to your use case.

If you're accessing Custom Vision through a client library or the REST API, you need to specify a domain ID when you create the project. You can get a list of domain IDs by using a Get Domains request, or use the following tables.
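If you're using the Python client library, the flow looks roughly like the following sketch. The endpoint, training key, and project name are placeholders for your own values, and the Food domain ID comes from the table later in this article.

```python
# Minimal sketch: list available domains and create a project that uses a
# specific domain ID, with the Custom Vision Python training client.
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

ENDPOINT = "https://<your-resource-name>.cognitiveservices.azure.com/"   # placeholder
TRAINING_KEY = "<your-training-key>"                                     # placeholder

credentials = ApiKeyCredentials(in_headers={"Training-key": TRAINING_KEY})
trainer = CustomVisionTrainingClient(ENDPOINT, credentials)

# Equivalent of the Get Domains request: print every domain with its type, name, and ID.
for domain in trainer.get_domains():
    print(domain.type, domain.name, domain.id)

# Create a classification project that uses the Food domain ID from the table below.
food_domain_id = "c151d5b5-dd07-472a-acc8-15d29dea8518"
project = trainer.create_project("My food classifier", domain_id=food_domain_id)
print("Created project:", project.id)
```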
Image classification domains
| Domain | ID | Purpose |
|---|---|---|
| General | ee85a74c-405e-4adc-bb47-ffa8ca0c9f31 | Optimized for a broad range of image classification tasks. If none of the other specific domains are appropriate, or if you're unsure of which domain to choose, select one of the General domains. |
| General [A1] | a8e3c40f-fb4a-466f-832a-5e457ae4a344 | Optimized for better accuracy with comparable inference time as the General domain. Recommended for larger datasets or more difficult user scenarios. This domain requires more training time. |
| General [A2] | 2e37d7fb-3a54-486a-b4d6-cfc369af0018 | Optimized for better accuracy with faster inference time than the General [A1] and General domains. Recommended for most datasets. This domain requires less training time than the General and General [A1] domains. |
| Food | c151d5b5-dd07-472a-acc8-15d29dea8518 | Optimized for photographs of dishes as you would see them on a restaurant menu. If you want to classify photographs of individual fruits or vegetables, use the Food domain. |
| Landmarks | ca455789-012d-4b50-9fec-5bb63841c793 | Optimized for recognizable landmarks, both natural and artificial. This domain works best when the landmark is clearly visible in the photograph. It works even if the landmark is slightly obstructed by people in front of it. |
| Retail | b30a91ae-e3c1-4f73-a81e-c270bff27c39 | Optimized for images that are found in a shopping catalog or shopping website. If you want high-precision classification between dresses, pants, and shirts, use this domain. |
| Compact domains | | Optimized for the constraints of real-time classification on edge devices. |
Note
The General [A1] and General [A2] domains can be used for a broad set of scenarios and are optimized for accuracy. Use the General [A2] model for better inference speed and shorter training time. For larger datasets, you might want to use General [A1] to achieve better accuracy than General [A2], though it requires more training and inference time. The General model requires more inference time than both General [A1] and General [A2].
Object detection domains
| Domain | ID | Purpose |
|---|---|---|
| General | da2e3a8a-40a5-4171-82f4-58522f70fbc1 | Optimized for a broad range of object detection tasks. If none of the other domains are appropriate, or you're unsure of which domain to choose, select the General domain. |
| General [A1] | 9c616dff-2e7d-ea11-af59-1866da359ce6 | Optimized for better accuracy with comparable inference time as the General domain. Recommended for more accurate region location needs, larger datasets, or more difficult user scenarios. This domain requires more training time, and results aren't deterministic: expect a ±1% mean Average Precision (mAP) difference with the same training data provided. |
| Logo | 1d8ffafe-ec40-4fb2-8f90-72b3b6cecea4 | Optimized for finding brand logos in images. |
| Products on shelves | 3780a898-81c3-4516-81ae-3a139614e1f3 | Optimized for detecting and classifying products on shelves. |
| Compact domains | | Optimized for the constraints of real-time object detection on edge devices. |
Compact domains
The models generated by compact domains can be exported to run locally. In the Custom Vision 3.4 public preview API, you can get a list of the exportable platforms for compact domains by calling the GetDomains API.
All of the following domains support export in ONNX, TensorFlow, TensorFlowLite, TensorFlow.js, CoreML, and VAIDK formats, with the exception that the Object Detection General (compact) domain doesn't support VAIDK.
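As a rough illustration, the following Python sketch filters the Get Domains response down to the exportable (compact) domains and then requests an ONNX export of a trained iteration. The trainer client setup, project ID, and iteration ID are placeholders assumed to exist already; in the 3.4 public preview, the GetDomains response also lists the export platforms available for each domain.

```python
# Minimal sketch: list compact (exportable) domains and request an ONNX export
# of a trained iteration. Endpoint, key, project ID, and iteration ID are placeholders.
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from msrest.authentication import ApiKeyCredentials

credentials = ApiKeyCredentials(in_headers={"Training-key": "<your-training-key>"})
trainer = CustomVisionTrainingClient(
    "https://<your-resource-name>.cognitiveservices.azure.com/", credentials
)

# Compact domains are the ones flagged as exportable.
for domain in trainer.get_domains():
    if domain.exportable:
        print(domain.type, domain.name, domain.id)

# Request an export of a trained iteration in ONNX format; poll the export list
# until the package is ready, then download it from its download URI.
export = trainer.export_iteration("<project-id>", "<iteration-id>", platform="ONNX")
print(export.status)
```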
Model performance varies by selected domain. The following table reports the model size and inference time on an Intel desktop CPU and an NVIDIA GPU [1]. These numbers don't include preprocessing and postprocessing time.
| Task | Domain | ID | Model size | CPU inference time | GPU inference time |
|---|---|---|---|---|---|
| Classification | General (compact) | 0732100f-1a38-4e49-a514-c9b44c697ab5 | 6 MB | 10 ms | 5 ms |
| Classification | General (compact) [S1] | a1db07ca-a19a-4830-bae8-e004a42dc863 | 43 MB | 50 ms | 5 ms |
| Object detection | General (compact) | a27d5ca5-bb19-49d8-a70a-fec086c47f5b | 45 MB | 35 ms | 5 ms |
| Object detection | General (compact) [S1] | 7ec2ac80-887b-48a6-8df9-8b1357765430 | 14 MB | 27 ms | 7 ms |
Note
The General (compact) domain for object detection requires special postprocessing logic. For details, see the example script included in the exported zip package. If you need a model without the postprocessing logic, use General (compact) [S1].
Important
There's no guarantee that the exported models give exactly the same results as the Prediction API in the cloud. Slight differences in the running platform or the preprocessing implementation can cause larger differences in the model outputs. For details about the preprocessing logic, see Quickstart: Create an image classification project.
[1] Intel Xeon E5-2690 CPU and NVIDIA Tesla M60
Related content
Follow a quickstart to get started creating and training a Custom Vision project.