Hi @Rabin Chaudhary,
Welcome to Microsoft Q&A, and thank you for posting your query.
The Dynamics 365 (D365) connector in Azure Data Factory (ADF) is primarily designed to read entities that are stored as structured records within the D365 environment. However, the specific data you mentioned (email sent, delivered, clicked, opened, and so on) is often calculated or aggregated in real time, which can make direct extraction with ADF difficult.
- The data you are trying to extract may not be stored as static records in D365; instead, it may be calculated on the fly from user interactions. In that case, the D365 connector cannot access it directly because it does not exist as a traditional entity.
- Even if the data is accessible through the D365 API, extracting it may require heavy API calls, which can lead to performance issues or rate limiting. This is especially true when you are extracting large volumes of data or when the API is not optimized for bulk extraction (see the copy activity sketch below this list).
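If the interaction data does turn out to be exposed as a Dataverse table in your environment, a standard copy activity with the Dynamics 365 source could pull it into the lake. The following is only a minimal sketch: the dataset names and the FetchXML entity name msdynmkt_emailinteraction are placeholders, and the actual table (if one exists) depends on your D365 Marketing / Customer Insights setup.

```json
{
    "name": "CopyEmailInteractions",
    "type": "Copy",
    "description": "Sketch only - dataset references and the FetchXML entity name are placeholders.",
    "inputs": [
        { "referenceName": "D365EmailInteractionsDataset", "type": "DatasetReference" }
    ],
    "outputs": [
        { "referenceName": "MarketingSinkDataset", "type": "DatasetReference" }
    ],
    "typeProperties": {
        "source": {
            "type": "DynamicsSource",
            "query": "<fetch><entity name='msdynmkt_emailinteraction'><all-attributes /></entity></fetch>"
        },
        "sink": {
            "type": "ParquetSink"
        }
    }
}
```

Using FetchXML in the source query lets you restrict the columns and rows pulled per run, which helps keep the API load and throttling risk down. A sketch of the MarketingSinkDataset referenced above is shown further down in this answer.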
Please refer to the ADF documentation for the Dynamics 365 connector for more details.
Yes, it is possible to save the data in a nested folder structure rather than directly at the container root in ADLS Gen2. To do this, specify the folder path in the sink dataset of your ADF pipeline.
- For example, if you want to save the data in a folder called "Marketing" within the "hot" container, you can specify the folder path as "hot/Marketing" in the sink dataset (see the sink dataset sketch after this list).
- Keep in mind that a deeply nested structure can affect the performance of your data lake. ADLS Gen2 is optimized for large-scale data storage and retrieval, and a very large number of nested folders can slow down listing, queries, and data processing, so it's recommended to keep the folder hierarchy as flat as practical.
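For reference, here is a minimal sketch of what the sink dataset JSON could look like, assuming a Parquet sink and an existing ADLS Gen2 linked service; the linked service name and file name are placeholders. Note that in the dataset JSON the container and the folder are separate properties (fileSystem and folderPath), even though the ADF authoring UI lets you type them together as "hot/Marketing".

```json
{
    "name": "MarketingSinkDataset",
    "properties": {
        "description": "Sketch only - linked service reference and file name are placeholders.",
        "type": "Parquet",
        "linkedServiceName": {
            "referenceName": "AdlsGen2LinkedService",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobFSLocation",
                "fileSystem": "hot",
                "folderPath": "Marketing",
                "fileName": "email_interactions.parquet"
            }
        }
    }
}
```

The Marketing folder does not need to exist in advance; the copy activity creates it on the first write.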
Hope the above answer helps! Please let us know if you have any further queries.
Please do not forget to "Accept the answer" and "up-vote" wherever the information provided helps you, as this can be beneficial to other community members.