SSIS Data Flow Task fails with Unspecified Error

We have a few SQL Agent jobs on SQL Server 2008 (10.0.2531) that execute SSIS packages created with BIDS 2008. The jobs are scheduled to run daily at a low frequency, the intention being to run them like a daemon service.

We started getting an "Unspecified Error" at random intervals, and the jobs always failed at the Data Flow Task.

The job's error log contains the message below. When the same error is captured by an event handler inside the package, the error code is reported as -2147467259, which is simply the signed 32-bit decimal form of the HRESULT 0x80004005.

Code: 0x80004005
Source: <Data Flow Task Name> SSIS.Pipeline
Description: Unspecified error
End Error
DTExec: The package execution returned DTSER_FAILURE(1)
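
The two codes are the same value seen two ways: 0x80004005 is the COM error E_FAIL ("Unspecified error"), and interpreting those four bytes as a signed 32-bit integer gives -2147467259. A one-line T-SQL cast confirms the conversion:

SELECT CAST(0x80004005 AS int) AS ErrorCode  -- returns -2147467259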

After struggling with the error for two weeks, we finally fixed it by limiting SQL Server's memory usage: we restricted SQL Server to consume at most 50% of physical memory. The reason is to leave enough memory for other services, such as Integration Services; otherwise, over time, SQL Server consumes almost all available memory and releases it only when the OS demands it back.
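
Before (and after) making the change, it is worth checking what the instance is currently allowed to use; on a default installation, max server memory is effectively unbounded (2147483647 MB). A quick check against sys.configurations:

SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ('min server memory (MB)', 'max server memory (MB)')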

Here is the exact script we used; tweak the percentage based on how much memory you want to allocate to SQL Server in your environment. Let me know if this blog has helped you.

DECLARE @TotalMemInGB int
DECLARE @MinMemInMB int
DECLARE @MaxMemInMB int

-- Total physical memory on the machine, in GB
SELECT @TotalMemInGB = physical_memory_in_bytes / (1024 * 1024) / 1024
FROM sys.dm_os_sys_info

-- Give SQL Server 50% of physical memory; adjust the factor to suit your environment
SET @MinMemInMB = (@TotalMemInGB * 0.5) * 1024
SET @MaxMemInMB = (@TotalMemInGB * 0.5) * 1024

EXEC sp_configure 'show advanced options', 1
RECONFIGURE

EXEC sp_configure 'max server memory', @MaxMemInMB
RECONFIGURE

EXEC sp_configure 'min server memory', @MinMemInMB
RECONFIGURE
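
RECONFIGURE puts the new limits into effect immediately; no restart is required. To watch the process working set settle under the new cap, here is one quick check (sys.dm_os_process_memory is available from SQL Server 2008 onward):

-- Physical memory currently held by the SQL Server process, in MB
SELECT physical_memory_in_use_kb / 1024 AS sql_server_memory_mb
FROM sys.dm_os_process_memory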

Comments

  • Anonymous
    November 29, 2012
    Hi, should these commands be executed for each process, or only once on the server?

  • Anonymous
    March 24, 2014
    Thanks for sharing this, we had the exact same problem and limiting SQL to use 2.5GB out of the available 4GB solved the issue.