User can usually debug the Copy Data activity in the pipeline successfully, but it sometimes fails

Yujin Zhu (Shanghai Wicresoft Co Ltd) 0 Reputation points Microsoft Vendor
2024-12-20T01:56:41.6+00:00

Problem Description: Recently, when users run the Copy Data Activity in Azure Data Factory, the debug operations usually succeed, but sometimes they fail. There have been no changes to the pipeline configuration. Both the source and sink are Azure Synapse Analytics. Users have encountered this issue before, and they want to avoid it as much as possible because it has affected their production.

More than one user has encountered this issue. Users have recently experienced the following errors during pipeline execution:

Operation on target Copy data1 failed: Failure happened on 'Source' side. 'Type=Microsoft.Data.SqlClient.SqlException,Message=A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.),Source=Framework Microsoft SqlClient Data Provider,''Type=System.ComponentModel.Win32Exception,Message=An existing connection was forcibly closed by the remote host,Source=,'

Operation on target Copy data1 failed: Failure happened on 'Source' side. 'Type=Microsoft.Data.SqlClient.SqlException,Message=Invalid object name 'dimensionattributelevelvalue'.,Source=Framework Microsoft SqlClient Data Provider

My guess:
1 It might be related to the APIs in the Synapse Link service; the fully qualified domain names (API endpoints) might be unstable or undergoing updates, but I'm not certain.
2 The amount of data in the Copy Data Activity is very large. I'm not sure whether the large data volume is causing the pipeline to run for too long, and whether the prolonged runtime increases the instability of the Synapse Link service connections, leading to occasional debugging failures in the pipeline.

User would like to know: 1 What are the reasons that cause the pipeline to sometimes fail to run? The user needs an answer for their own users.

User would like to ask:

1 Given that there have been no changes to the pipeline configuration, how can we resolve the issue of the pipeline sometimes throwing errors during debugging?
2 The user speculates that the current issue may be related to the large volume of data, which results in longer pipeline execution times and potentially increases the risk of errors during execution. We would like to ask whether it is possible to reduce the pipeline runtime by optimizing the pipeline configuration or adjusting the data processing strategy. If there are such solutions, we would very much like to learn the specific steps.



1 answer

  1. Chandra Boorla 5,755 Reputation points Microsoft Vendor
    2024-12-20T03:06:13.7766667+00:00

    @Yujin Zhu (Shanghai Wicresoft Co Ltd)

    Greetings & Welcome to Microsoft Q&A forum! Thanks for posting your query!

    Problem Description: Recently, when users run the Copy Data Activity in Azure Data Factory, the debug operations usually succeed, but sometimes they fail. There have been no changes to the pipeline configuration. Both the source and sink are Azure Synapse Analytics. Users have encountered this issue before, and they want to avoid it as much as possible because it has affected their production.

    Retry Mechanism - Implement retry logic in the Copy Data Activity settings. This will automatically retry the activity a specified number of times if it fails due to transient errors.
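
    A minimal sketch of how this might look in the Copy activity's JSON policy block; the retry count, interval, and timeout values below are illustrative, not prescriptive:

    ```json
    {
        "name": "Copy data1",
        "type": "Copy",
        "policy": {
            "retry": 3,
            "retryIntervalInSeconds": 60,
            "timeout": "0.12:00:00"
        }
    }
    ```

    The same policy block also carries the activity timeout discussed further below, so both can be tuned together.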

    Monitor ADF and Synapse - Use Azure Monitor to track pipeline runs, activity durations, and Synapse resource utilization. This can help identify resource bottlenecks or network issues.

    Optimize Source Query - If applicable, optimize the query used to extract data from the source Synapse database. Use appropriate indexes, avoid unnecessary joins, and consider using views or stored procedures.
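
    As a sketch, the Copy activity source can use an explicit query that selects only the required columns and filters rows at the source; the dbo schema and the placeholders in angle brackets are assumptions to adapt to your model:

    ```json
    "source": {
        "type": "SqlDWSource",
        "sqlReaderQuery": "SELECT <required columns only> FROM dbo.dimensionattributelevelvalue WHERE <filter that limits rows>",
        "queryTimeout": "02:00:00"
    }
    ```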

    Increase Timeout - If you suspect timeout issues, increase the timeout setting for the Copy Data Activity. However, optimizing the query or data processing is generally a better long-term solution.

    Staged Copying - Break down large data transfers into smaller batches or stages. This can improve stability and reduce the impact of transient errors.
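
    A sketch of enabling staged copy in the Copy activity's typeProperties, assuming a hypothetical Azure Blob Storage linked service named AzureBlobStaging; the linked service name and path are placeholders:

    ```json
    "typeProperties": {
        "source": { "type": "SqlDWSource" },
        "sink": { "type": "SqlDWSink" },
        "enableStaging": true,
        "stagingSettings": {
            "linkedServiceName": {
                "referenceName": "AzureBlobStaging",
                "type": "LinkedServiceReference"
            },
            "path": "stagingcontainer/copydata1"
        }
    }
    ```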

    Operation on target Copy data1 failed: Failure happened on 'Source' side. 'Type=Microsoft.Data.SqlClient.SqlException,Message=A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.),Source=Framework Microsoft SqlClient Data Provider,''Type=System.ComponentModel.Win32Exception,Message=An existing connection was forcibly closed by the remote host,Source=,'

    Operation on target Copy data1 failed: Failure happened on 'Source' side. 'Type=Microsoft.Data.SqlClient.SqlException,Message=Invalid object name 'dimensionattributelevelvalue'.,Source=Framework Microsoft SqlClient Data Provider

    There are several potential reasons why the Copy Data Activity in Azure Data Factory might sometimes fail during debugging, even without changes to the pipeline configuration. Here are some insights:

    Transport-Level Errors - Error: “A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.)”

    Root Cause - This typically indicates network instability, such as:

    • Temporary interruptions in connectivity between Azure Data Factory (ADF) and the Synapse source.
    • Throttling or resource contention in Synapse Analytics during query execution.
    • A timeout due to large datasets or prolonged query execution.

    Ensure that there are no ongoing network issues or service disruptions affecting Azure Data Factory or Azure Synapse Analytics. You can check the Azure Service Health dashboard for any alerts.

    Invalid Object Name - Error: “Invalid object name 'dimensionattributelevelvalue'.”

    Root Cause - This error indicates that the specified table (dimensionattributelevelvalue) is not accessible at runtime, possibly because:

    • The table does not exist in the expected database or schema.
    • Permissions issues prevent access.
    • Dynamic table names used in the pipeline resolve incorrectly.
    • Schema changes or object renames occurred without updating the pipeline.

    Double-check the configuration of the Copy Data Activity, particularly the source and sink settings. Ensure that the table name dimensionattributelevelvalue is correctly specified and exists in the database. Verify that the correct schema is being referenced.
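
    If the table reference comes from a dataset rather than an inline query, an explicitly schema-qualified Synapse dataset can help rule out resolution problems; in this sketch the dataset and linked service names are hypothetical and the dbo schema is an assumption:

    ```json
    {
        "name": "SynapseSourceTable",
        "properties": {
            "type": "AzureSqlDWTable",
            "linkedServiceName": {
                "referenceName": "AzureSynapseAnalyticsLS",
                "type": "LinkedServiceReference"
            },
            "typeProperties": {
                "schema": "dbo",
                "table": "dimensionattributelevelvalue"
            }
        }
    }
    ```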

    1 It might be related to the APIs in the Synapse Link service; the fully qualified domain names (API endpoints) might be unstable or undergoing updates, but I'm not certain.

    Thank you for your insights regarding the intermittent failures you and other users have been experiencing with the Copy Data Activity in Azure Data Factory. Your observation about the possibility of instability in the APIs of the Synapse Link service is certainly worth considering.

    Potential API Instability -

    • API Updates or Maintenance - It’s possible that the APIs used by the Synapse Link service are undergoing updates or maintenance, which could lead to temporary instability. This can affect connectivity and result in errors during pipeline execution.
    • Fully Qualified Domain Name (FQDN) Issues - If the FQDNs for the APIs are experiencing issues, it could lead to transport-level errors, as you mentioned. This might manifest as connection drops or timeouts during data transfer.

    2 The amount of data in the Copy Data Activity is very large. I'm not sure whether the large data volume is causing the pipeline to run for too long, and whether the prolonged runtime increases the instability of the Synapse Link service connections, leading to occasional debugging failures in the pipeline.

    Thank you for your insights regarding the large amount of data being processed in the Copy Data Activity. Your concern about how this might be affecting the stability of the Synapse Link service connections and leading to occasional debugging failures is very valid. Here are a few recommendations to mitigate the issue (a configuration sketch follows the list):

    • Incremental Data Loads - Instead of processing the entire dataset at once, implement incremental loading to only transfer new or changed data. This reduces the volume of data being processed in each run and can significantly improve performance.
    • Batch Processing - If possible, break the data into smaller batches. This can help manage the load on the pipeline and reduce the risk of timeouts or connection issues.
    • Optimize Data Integration Units (DIUs) - Adjust the DIUs in your Copy Data Activity to increase throughput. Higher DIUs can improve performance but be mindful of the cost implications.
    • Use Parallelism - If your data can be partitioned, consider using parallel copy activities to process multiple partitions simultaneously. This can help speed up the overall data transfer process.
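
    As a configuration sketch, the throughput-related settings sit on the Copy activity's typeProperties; the values and the partition column are illustrative, and source-side partitioning only applies when the source table has a suitable column:

    ```json
    "typeProperties": {
        "source": {
            "type": "SqlDWSource",
            "partitionOption": "DynamicRange",
            "partitionSettings": {
                "partitionColumnName": "<numeric or date column>"
            }
        },
        "sink": { "type": "SqlDWSink" },
        "dataIntegrationUnits": 16,
        "parallelCopies": 4
    }
    ```

    For incremental loads, the source query would additionally filter on a watermark column (for example, a last-modified timestamp stored from the previous run).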

    What are the reasons that cause the pipeline to sometimes fail to run? The user needs an answer for their own users.

    Below are some of the most common causes of pipeline failures:

    • Transient Errors
    • Configuration Errors
    • Data Type Mismatches
    • Resource Throttling
    • Schema Changes
    • Permission Issues
    • Timeouts
    • Dependency Failures
    • Data Quality Issues

    User would like to ask:

    1 Given that there have been no changes to the pipeline configuration, how can we resolve the issue of the pipeline sometimes throwing errors during debugging?

    2 The user speculates that the current issue may be related to the large volume of data, which results in longer pipeline execution times and potentially increases the risk of errors during execution. We would like to ask whether it is possible to reduce the pipeline runtime by optimizing the pipeline configuration or adjusting the data processing strategy. If there are such solutions, we would very much like to learn the specific steps.

    To address these issues, consider the following recommendations:

    Optimize Pipeline Configuration - Review the pipeline configuration for any opportunities to optimize performance. This could include the following (a batching sketch follows the list):

    • Breaking down large datasets into smaller batches to reduce the load during each execution.
    • Using parallel processing where applicable to speed up data transfer.
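
    One way to realize the batching point above is to drive several smaller Copy runs from a ForEach activity; this sketch assumes a hypothetical pipeline parameter named partitionList and shows only a placeholder inner Copy activity:

    ```json
    {
        "name": "ForEachPartition",
        "type": "ForEach",
        "typeProperties": {
            "items": {
                "value": "@pipeline().parameters.partitionList",
                "type": "Expression"
            },
            "isSequential": false,
            "batchCount": 4,
            "activities": [
                { "name": "Copy data1", "type": "Copy" }
            ]
        }
    }
    ```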

    Adjust Data Processing Strategies - Implement strategies such as the following (a fault-tolerance sketch follows the list):

    • Utilizing fault tolerance settings in the Copy Data Activity to handle transient errors more gracefully.
    • Monitoring and adjusting the integration runtime settings to ensure they are appropriately scaled for the workload.
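
    A sketch of the fault-tolerance settings on the Copy activity's typeProperties; note that these settings skip or redirect rows the sink rejects rather than retrying network drops, and the linked service name and log path shown are hypothetical:

    ```json
    "typeProperties": {
        "enableSkipIncompatibleRow": true,
        "redirectIncompatibleRowSettings": {
            "linkedServiceName": {
                "referenceName": "AzureBlobLogging",
                "type": "LinkedServiceReference"
            },
            "path": "errorlogs/copydata1"
        }
    }
    ```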

    Monitoring and Logging - Enhance monitoring and logging to capture detailed error messages and execution metrics. This will help identify patterns or specific conditions under which failures occur.
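
    If you want per-run details written to storage, the Copy activity's session log can be enabled as sketched below; the property layout reflects the copy-activity log settings as I understand them, and the linked service name and path are placeholders:

    ```json
    "typeProperties": {
        "logSettings": {
            "enableCopyActivityLog": true,
            "copyActivityLogSettings": {
                "logLevel": "Warning",
                "enableReliableLogging": false
            },
            "logLocationSettings": {
                "linkedServiceName": {
                    "referenceName": "AzureBlobLogging",
                    "type": "LinkedServiceReference"
                },
                "path": "copylogs"
            }
        }
    }
    ```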

    By implementing these strategies, you may be able to reduce the frequency of errors during debugging and improve the overall reliability of the pipeline.

    I hope this information helps. Please do let us know if you have any further queries.


    If this answers your query, please click Accept Answer and select Yes for "Was this answer helpful". If you have any further queries, do let us know.

