For large tables, it helps to partition or batch the data. In the ADF Copy Data activity, try enabling data partitioning on the source (the partition option offered for SQL-family sources) and specify a partition column, ideally an integer or date column, so the copy is split into manageable chunks that can be read in parallel.
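For a SQL Server or Azure SQL source, the dynamic-range option looks roughly like this in the copy activity's source JSON. This is a minimal sketch: the column name and bounds are placeholders you would replace with values from your own table.

```json
{
    "source": {
        "type": "SqlServerSource",
        "partitionOption": "DynamicRange",
        "partitionSettings": {
            "partitionColumnName": "OrderId",
            "partitionLowerBound": "1",
            "partitionUpperBound": "50000000"
        }
    }
}
```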
Try limiting concurrency as well. The copy activity's source and sink each expose a "Max concurrent connections" setting, and the self-hosted integration runtime node has a concurrent jobs limit; lowering these reduces competition for CPU, memory, and network on the runtime machine during the copy.
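The connection cap is set per copy source or sink rather than on the runtime itself. A sketch, again assuming a SQL Server source, with 4 chosen as an arbitrary example value:

```json
{
    "source": {
        "type": "SqlServerSource",
        "maxConcurrentConnections": 4
    }
}
```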
Consider upgrading the VM that hosts the self-hosted integration runtime, or adding nodes to scale it out. With more CPU, memory, and network bandwidth available, the runtime can move the large dataset more effectively.
If partitioning isn't feasible, consider a paginated source query in the Copy Data activity, retrieving the data in chunks with OFFSET/FETCH (or your source's equivalent). A single query won't paginate on its own, so you would typically drive it from a ForEach or Until loop that advances the offset on each iteration.
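One way to wire this up (a sketch, not the only approach): parameterize the source query and let the loop supply the offset. The table, column, and parameter names below are hypothetical; the ORDER BY is required for OFFSET/FETCH in T-SQL.

```json
{
    "source": {
        "type": "SqlServerSource",
        "sqlReaderQuery": {
            "value": "SELECT * FROM dbo.BigTable ORDER BY Id OFFSET @{item().Offset} ROWS FETCH NEXT @{pipeline().parameters.BatchSize} ROWS ONLY",
            "type": "Expression"
        }
    }
}
```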
Finally, check whether CPU and memory usage on the runtime machine spike during the transfer. If they do, that confirms the runtime is the bottleneck, and increasing the compute size or adding nodes (as above) is the most direct fix.