@Sherwin Shoujing ZHU Thanks for reaching out!
We need more details to better understand the approach being taken. If the "import and vectorize data" option is used, the system automatically generates the chunks and chunk_ids required for proper functionality.
However, the "import data" option does not support vector field mappings, so it is unclear whether the wizard is being used in a way that aligns with its intended functionality.
If the goal is to import pre-chunked data, a programmatic method is required to map the chunks as individual documents, rather than relying on the one-document-to-many-chunks structure that the "import and vectorize data" feature provides.
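As a rough illustration of that one-document-per-chunk mapping, here is a minimal sketch in plain Python. The field names (chunk_id, parent_id, content) are hypothetical; adjust them to match the fields defined in your index schema.

```python
# Minimal sketch: flatten one pre-chunked source document into a list of
# standalone search documents, one per chunk. Field names (chunk_id,
# parent_id, content) are hypothetical -- match them to your index schema.

def chunks_to_documents(parent_id, chunks):
    """Return one search document per chunk, each with its own unique key."""
    return [
        {
            "chunk_id": f"{parent_id}-{i}",  # unique document key per chunk
            "parent_id": parent_id,          # link back to the source document
            "content": chunk,                # the pre-chunked text itself
        }
        for i, chunk in enumerate(chunks)
    ]

docs = chunks_to_documents("doc1", ["first chunk", "second chunk"])
```

The resulting list of documents can then be uploaded to the index with your client of choice (for example, the push APIs of the azure-search-documents SDK).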
In this case, a regular indexer should be set up without integrated vectorization or skillsets, assuming all the required chunks and data are already prepared. Field mappings can then be configured as necessary, ensuring they correspond to the correct fields in the schema.
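For reference, a plain indexer of this kind follows the standard REST shape shown below; the name, data source, index, and field names here are placeholders, and note that no skillset is attached:

```
{
  "name": "my-indexer",
  "dataSourceName": "my-datasource",
  "targetIndexName": "my-index",
  "fieldMappings": [
    { "sourceFieldName": "id", "targetFieldName": "chunk_id" },
    { "sourceFieldName": "text", "targetFieldName": "content" }
  ]
}
```

Each fieldMappings entry routes one source field to the matching field in the index schema.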
References:
Search over JSON blobs - Azure AI Search | Microsoft Learn
Field mappings in Azure AI Search indexers: https://learn.microsoft.com/en-us/azure/search/search-indexer-field-mappings
Please let us know.