Deploy Manufacturing data solutions using Azure portal

Important

Some or all of this functionality is available as part of a preview release. The content and the functionality are subject to change.

This section provides information on how to deploy Manufacturing data solutions in the designated tenant. The prerequisites must be completed before deploying Manufacturing data solutions to the designated tenant. To deploy Manufacturing data solutions, perform the following steps:

  1. Open the Manufacturing data solutions deployment wizard from the Azure portal in one of two ways.

  2. Select Create to create a new Manufacturing data solutions resource.

  3. Fill in the values on the Basics tab, and then select Next. A validation sketch of these inputs follows the table.

    Image shows how to create a new Manufacturing data solutions resource.

    | Setting | Description |
    | --- | --- |
    | Subscription | Choose the subscription to deploy the Manufacturing data solutions resource in. |
    | Resource group | Create or choose the resource group where you want to create the Manufacturing data solutions resource. |
    | Name | Name of the Manufacturing data solutions resource. The name must not exceed 21 characters. |
    | Region | The region to deploy the Manufacturing data solutions resource in. |
    | SKU | Select the Basic SKU for a Dev/Test release, or Standard for a Production release. For more details, see SKU. |
    | Entra Application ID | Provide the Application ID of the Manufacturing data solutions app registration that you created. |
    | AKS Admin Group ID | Provide the Microsoft Entra ID group ID that you created with a list of owners and members. |
    | Enable Copilot | Select whether or not to deploy the resources needed for the Factory Operations Agent in Azure AI. |
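The name-length and identifier requirements above can be checked before you start the wizard. The following is a minimal, illustrative sketch; the `validate_basics` helper and the sample values are hypothetical and not part of the product:

```python
import uuid

# Hypothetical helper that mirrors the Basics tab rules described above.
def validate_basics(name: str, sku: str, entra_app_id: str, aks_admin_group_id: str) -> None:
    if len(name) > 21:
        raise ValueError("Resource name must not exceed 21 characters.")
    if sku not in ("Basic", "Standard"):
        raise ValueError("SKU must be Basic (Dev/Test) or Standard (Production).")
    # The Entra Application ID and the AKS Admin Group ID are both GUIDs.
    uuid.UUID(entra_app_id)
    uuid.UUID(aks_admin_group_id)

validate_basics(
    name="contoso-mds-dev",                                    # 15 characters, within the limit
    sku="Basic",
    entra_app_id="00000000-0000-0000-0000-000000000000",       # placeholder GUID
    aks_admin_group_id="00000000-0000-0000-0000-000000000000", # placeholder GUID
)
```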

Note

If you deploy again with the same name in the same resource group and subscription within a span of 7 days, purge the earlier App Configuration instance and then start the deployment.

  4. Fill in the values on the Fabric Configuration tab, and then select Next. Select Add/Change to choose a managed identity. A sketch of these settings follows the table.

    The image shows Fabric Configuration.

    | Setting | Description |
    | --- | --- |
    | User Assigned managed identity | Provide the user-assigned managed identity that is configured to read the secrets in Key Vault. This value is required and is used for provisioning resources in your subscription. It's also required if an Azure OpenAI resource is onboarded, in order to fetch the necessary details. |
    | Fabric Key Vault URI | URI of the Azure Key Vault. |
    | Fabric One Lake URI | URI of the OneLake created in the Fabric workspace. |
    | Fabric One Lake Path | Path of the Lakehouse created in the Fabric workspace. |
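For reference, the Fabric Configuration values can be thought of as a small parameter bag passed to the deployment. This is an illustrative sketch only; the property names, resource ID, and URI formats are assumptions and may not match the actual ARM parameters used by the service:

```python
# Illustrative only: property names and URI/path formats are assumptions,
# not the service's actual ARM parameter names.
fabric_configuration = {
    "userAssignedManagedIdentity": (
        "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
        "/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity-name>"
    ),
    "fabricKeyVaultUri": "https://<key-vault-name>.vault.azure.net/",
    "fabricOneLakeUri": "https://onelake.dfs.fabric.microsoft.com/<workspace-name>",
    "fabricOneLakePath": "<workspace-name>/<lakehouse-name>/Files",
}
```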
  5. If the agent is enabled on the Basics tab, you can either onboard your own Azure OpenAI resource or configure a Manufacturing data solutions managed Azure OpenAI deployment. Fill out the details on the Azure OpenAI Configuration screen, and then select Review and Create.

Agent configuration - Default

Choose Default if no custom configuration is required. In this case, the service tries to deploy a model in the following order of preference, based on availability: gpt-4 (version 0125-Preview), then gpt-4-32k (version 0613), then gpt-4o (version 2024-05-13).
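As an illustration of that preference order (not the service's actual code), a client-side selection could look like the following, where `available` stands for the model/version pairs offered in the target region:

```python
# Preference order from the Default agent configuration described above.
GPT_MODEL_PREFERENCES = [
    {"name": "gpt-4", "version": "0125-Preview"},
    {"name": "gpt-4-32k", "version": "0613"},
    {"name": "gpt-4o", "version": "2024-05-13"},
]

def pick_default_model(available: set[tuple[str, str]]) -> dict:
    """Return the first preferred model that is available in the target region."""
    for model in GPT_MODEL_PREFERENCES:
        if (model["name"], model["version"]) in available:
            return model
    raise RuntimeError("None of the preferred GPT models are available in this region.")

# Example: only gpt-4o is available in the chosen region.
print(pick_default_model({("gpt-4o", "2024-05-13")}))
```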

Agent configuration - Bring your own Azure OpenAI resource

During an update, you can switch from a user-managed Azure OpenAI resource to a Manufacturing data solutions managed Azure OpenAI deployment.

Screenshot showing the fields that are required for Bring your own Azure OpenAI Model.

| Bring your own Azure OpenAI resource | Description |
| --- | --- |
| Resource ID | The Azure OpenAI resource ID from your deployment. |
| GPT Model Deployment Name | Name of your Azure OpenAI GPT model deployment. |
| Embedding Model Deployment Name | Name of your embedding model deployment. |

The Azure OpenAI resource must be in the same tenant, but it can be in any subscription, resource group, and region. The user-assigned managed identity (provided on the Basics tab) must have the Cognitive Services OpenAI User role so that the data plane service can access the models, and either the Contributor or Owner role so that the control plane service can access the resource.
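A minimal sketch of assigning those roles with the Azure SDK for Python is shown below. It assumes a recent azure-mgmt-authorization version with a flattened RoleAssignmentCreateParameters model; the resource ID and principal ID are placeholders, and the GUIDs are the well-known built-in role definition IDs:

```python
import uuid
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient
from azure.mgmt.authorization.models import RoleAssignmentCreateParameters

subscription_id = "<openai-subscription-id>"   # subscription hosting the Azure OpenAI resource
openai_resource_id = (
    f"/subscriptions/{subscription_id}/resourceGroups/<openai-resource-group>"
    "/providers/Microsoft.CognitiveServices/accounts/<openai-account-name>"
)
identity_principal_id = "<principal-id-of-user-assigned-identity>"

# Built-in role definition GUIDs.
role_definition_guids = [
    "5e0bd9bd-7b93-4f28-af87-19fc36ad61bd",  # Cognitive Services OpenAI User (data plane)
    "b24988ac-6180-42a0-ab88-20f7382dd24c",  # Contributor (control plane); Owner also works
]

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)
for guid in role_definition_guids:
    client.role_assignments.create(
        scope=openai_resource_id,
        role_assignment_name=str(uuid.uuid4()),
        parameters=RoleAssignmentCreateParameters(
            role_definition_id=(
                f"/subscriptions/{subscription_id}/providers/"
                f"Microsoft.Authorization/roleDefinitions/{guid}"
            ),
            principal_id=identity_principal_id,
            principal_type="ServicePrincipal",
        ),
    )
```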

Agent configuration - Configure model

Screen showing the fields that are required for configuring LLM model deployment.

| Model Configuration | Description |
| --- | --- |
| GPT Model Name | The large language model to use. |
| GPT Model Version | The version of the large language model to use. |
| GPT Model Capacity | The capacity (thousands of tokens per minute) for the large language model. The capacity value must be between 5 and 90. |
| Embedding Model Capacity | The capacity (thousands of tokens per minute) for the embeddings model. The capacity value must be between 100 and 240. |

The available models differ by region. The model can also be changed after deployment.
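The capacity bounds in the table above are easy to check up front. A minimal sketch, with a hypothetical helper that isn't part of the product:

```python
# Capacity limits from the model configuration table above
# (capacity is in thousands of tokens per minute).
GPT_CAPACITY_RANGE = (5, 90)
EMBEDDING_CAPACITY_RANGE = (100, 240)

def validate_capacity(value: int, low: int, high: int, label: str) -> None:
    if not low <= value <= high:
        raise ValueError(
            f"{label} capacity must be between {low} and {high} K tokens/min, got {value}."
        )

validate_capacity(30, *GPT_CAPACITY_RANGE, "GPT model")
validate_capacity(120, *EMBEDDING_CAPACITY_RANGE, "Embedding model")
```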

The deployment takes around 40 to 50 minutes to complete. Open the Azure portal and select Deployments in the newly created resource group to check the status of the deployment.
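Instead of the portal, you can also check the deployment status with the Azure SDK for Python. A minimal sketch, assuming azure-identity and azure-mgmt-resource are installed and the placeholders are replaced:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# List the ARM deployments in the new resource group and print their provisioning state.
client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")
for deployment in client.deployments.list_by_resource_group("<resource-group-name>"):
    print(deployment.name, deployment.properties.provisioning_state)
```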

SKU

There are multiple SKU options available. Depending on the selected SKU, the capacity of the underlying resources, such as Azure Cosmos DB and Azure Data Explorer (ADX), is provisioned.

| SKU Type | SKU Name | Azure Data Explorer SKU | Cosmos DB RUs (EntityStoreCollection) | Function App SKU |
| --- | --- | --- | --- | --- |
| Basic | Basic_B0 | Dev(No SLA)_Standard_E2a_v4 (Manual scale - instance count 1) | 10000 RUs | EP1 |
| Standard | Standard_S0 | Standard_E4ads_v5 (Manual scale - instance count 2) | 10000 RUs | EP1 |
| Standard | Standard_S1 | Standard_E4ads_v5 (Optimized Autoscale - instance count 4 to 6) | 20000 RUs | EP1 |
| Standard | Standard_S2 | Standard_E4ads_v5 (Optimized Autoscale - instance count 6 to 10) | 40000 RUs | EP2 |
| Standard | Standard_S3 | Standard_E8ads_v5 (Optimized Autoscale - instance count 5) | 40000 RUs | EP3 |
| Standard | Standard_S4 | Standard_E8ads_v5 (Optimized Autoscale - instance count 5 to 10) | 40000 RUs | EP3 |
| Standard | Standard_S5 | Standard_E8d_v5 (Optimized Autoscale - instance count 6 to 20) | 100000 RUs | EP2 |

Note

Higher SKUs also offer the capability of running more queries simultaneously for improved performance; these are implementation details, so they aren't highlighted in the table. The Azure Data Explorer (ADX) settings for shuffle partitions and maximum concurrent requests are set to optimized values. The Event Hubs throughput units are optimized, and the Function App workers have different limits to optimize ingestion and consumption performance.

While creating a Manufacturing data solutions instance:

  • If Basic is selected, the system defaults to Basic_B0.
  • If Standard is selected, the system defaults to Standard_S2, because a high ingestion load is expected after Manufacturing data solutions creation.

After the daily or weekly ingestion load is reduced, you can trigger an update on the Manufacturing data solutions instance and move to a lower SKU, which reduces the capacity of Azure Cosmos DB and Azure Data Explorer (ADX).

  • You can't switch from Basic to Standard SKUs.
  • You can switch between Standard SKUs (S0 to S5).
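A minimal sketch of these switching rules; the helper is illustrative, not a product API:

```python
# SKU switching rules described above: Basic can't move to Standard,
# Standard can't move to Basic, and any Standard tier can move to another.
BASIC_SKUS = {"Basic_B0"}
STANDARD_SKUS = {f"Standard_S{i}" for i in range(6)}  # Standard_S0 .. Standard_S5

def can_switch(current_sku: str, target_sku: str) -> bool:
    if current_sku in BASIC_SKUS:
        return target_sku in BASIC_SKUS        # Basic can't be upgraded to Standard
    if current_sku in STANDARD_SKUS:
        return target_sku in STANDARD_SKUS     # Standard can move between S0..S5, not to Basic
    raise ValueError(f"Unknown SKU: {current_sku}")

assert can_switch("Standard_S2", "Standard_S0")    # scale down after the initial ingestion load
assert not can_switch("Standard_S2", "Basic_B0")   # not supported
assert not can_switch("Basic_B0", "Standard_S2")   # not supported
```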

Note

For the Basic SKU, infrastructure resources aren't deployed with zone redundancy, and there is no SLA for this SKU. For Standard SKUs, infrastructure resources are deployed with zone redundancy and have standard SLAs.

Note

Switching from any Standard SKU to the Basic SKU isn't supported. The SKU can otherwise be updated after deployment.

Configure Azure OpenAI model

During the deployment, if the agent is enabled, two models are deployed: an Azure OpenAI GPT model and an embeddings model.

You can configure the following GPT model settings:

  • GPT Model name
  • GPT Model version
  • GPT Model capacity

You can change the Embedding Model Capacity for the embeddings model. However, the embeddings model name and version can't be changed; they default to text-embedding-ada-002, version 2.
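Put together, the configurable and fixed model settings look roughly like the following sketch. The GPT values shown are sample values, not service defaults; only the fixed embeddings name and version come from the documentation above:

```python
from dataclasses import dataclass, field

@dataclass
class AgentModelConfig:
    # Configurable settings (sample values, not service defaults).
    gpt_model_name: str = "gpt-4o"
    gpt_model_version: str = "2024-05-13"
    gpt_model_capacity: int = 30             # K tokens/min, allowed range 5-90
    embedding_model_capacity: int = 120      # K tokens/min, allowed range 100-240
    # Fixed by Manufacturing data solutions; not user-configurable.
    embedding_model_name: str = field(default="text-embedding-ada-002", init=False)
    embedding_model_version: str = field(default="2", init=False)

config = AgentModelConfig(gpt_model_capacity=60)
print(config)
```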

Deployment results

After the deployment finishes, Manufacturing data solutions is created, bundled with the supporting Azure resources. The Azure resources associated with Manufacturing data solutions are hosted in two new resource groups.

Image of Manufacturing data solutions.

Next steps