Recently I hit a server memory issue while running an SSIS package. I was pulling binary data from a remote server, but the production server was unable to process it because of limited memory.
There are two choices to resolve this. The first is to increase server memory; this wouldn't really solve the problem, because the data will keep growing day by day.
The second is to split the load into multiple batches, say 300 records per batch. If there are 2,000 records, that means 7 round trips to the remote server to load the data: 6 full batches of 300 and a seventh of 200.
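The round-trip arithmetic above can be sketched as follows (a quick illustration using the example numbers, not values from any real table):

```csharp
using System;

class BatchMath
{
    static void Main()
    {
        int totalRows = 2000;   // rows on the remote server
        int chunkSize = 300;    // rows per batch

        int fullBatches = totalRows / chunkSize;                  // 6 full batches of 300
        int remainder   = totalRows % chunkSize;                  // 200 rows left over
        int roundTrips  = fullBatches + (remainder > 0 ? 1 : 0);  // 7 trips in total

        Console.WriteLine($"{roundTrips} round trips: {fullBatches} x {chunkSize} + {remainder}");
        // prints: 7 round trips: 6 x 300 + 200
    }
}
```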
Here is the design:
I will be using the SQL OFFSET-FETCH feature and an SSIS Script Component for this. For an OLE DB example, click the link under Resources.
Declare five variables as follows:
1) vRowCount (Int32): stores the total number of rows in the source table
2) vRC_IncrementValue (Int32): stores the number of rows to skip, specified in the OFFSET clause
3) vRC_Increment (Int32): stores the running offset maintained by the loop
4) vRC_ChunkValue (Int32): specifies the number of rows in each chunk of data
5) vRC_BatchValue (Int32): specifies the number of rows to fetch in the current batch
After declaring the variables, we assign a default value to the vRC_ChunkValue variable; in this example, we will set it to 100.
Our select query inside the Script Component source looks like this:
string vSqlStatement = $@"
SELECT *
FROM [dbo].[tblSample]
WHERE 1=1
-- get chunks
ORDER BY SampleID
OFFSET {Variables.vRCIncrementValue} ROWS
FETCH NEXT {Variables.vRCBatchValue} ROWS ONLY";
Next, add an Execute SQL Task to get the total number of rows from the source table, and change its ResultSet property to Single row.
Assign the return value to the vRowCount variable.
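The row-count query itself can be as simple as the following (this assumes the same sample table used throughout; adjust the table name to your own source):

```sql
-- returns a single row with the total row count for the For Loop
SELECT COUNT(*) AS TotalRows
FROM [dbo].[tblSample];
```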
Next, add two Expression Tasks to copy the values from the operating variables to the query variables:
Next, add a For Loop Container with the following configuration:
Add a Data Flow Task inside the For Loop Container. Add a Script Component and configure it as a source:
Configure the output columns on the Inputs and Outputs tab:
Configure the connection on the Connection Managers tab:
Click “Edit Script” and make these changes:
// assumes: using System.Data.SqlClient; and a connection manager named GoldenConn
IDTSConnectionManager100 connMgr;
SqlConnection sqlConn;
SqlDataReader sqlReader;
public override void AcquireConnections(object Transaction)
{
//base.AcquireConnections(Transaction);
connMgr = this.Connections.GoldenConn;
sqlConn = (SqlConnection)connMgr.AcquireConnection(null);
}
public override void ReleaseConnections()
{
//base.ReleaseConnections();
connMgr.ReleaseConnection(sqlConn);
}
public override void PreExecute()
{
base.PreExecute();
//create sql statement
string vSqlStatement = $@"
SELECT *
FROM [dbo].[tblSample]
WHERE 1=1
-- get chunks
ORDER BY SampleID
OFFSET {Variables.vRCIncrementValue} ROWS
FETCH NEXT {Variables.vRCBatchValue} ROWS ONLY";
//MessageBox.Show(vSqlStatement);
SqlCommand cmd = new SqlCommand(vSqlStatement, sqlConn);
/*
7200 sec = 120 min = 2 hours. This can be set to 0 for no timeout at all.
Will this work? It also depends on the server's timeout settings. In most SQL Server installs,
the default timeout for remote queries is 600 seconds (10 minutes).
*/
cmd.CommandTimeout = 7200;
sqlReader = cmd.ExecuteReader();
}
public override void PostExecute()
{
base.PostExecute();
sqlReader.Close();
}
public override void CreateNewOutputRows()
{
try
{
while (sqlReader.Read())
{
SampleDataBuffer.AddRow();
SampleDataBuffer.SampleID = sqlReader.GetString(0);
SampleDataBuffer.AddDate = sqlReader.IsDBNull(7) ? null : sqlReader.GetString(7);
}
}
catch (Exception ex)
{
//set to true to cause execution to abort
bool cancel = false;
//raise the error event to SSIS
ComponentMetaData.FireError(-1, "CreateNewOutputRows()", ex.Message, "", -1, out cancel);
}
}
Next, add a Script Task inside the For Loop Container to calculate the remaining rows:
Edit the script and add this:
// make sure all rows are accounted for
int rowCount = (int)Dts.Variables["User::vRowCount"].Value;
int rowIncrement = (int)Dts.Variables["User::vRC_Increment"].Value;
int rowChunkValue = (int)Dts.Variables["User::vRC_ChunkValue"].Value;
//this is our new offset value
rowIncrement = rowIncrement + rowChunkValue;
Dts.Variables["User::vRC_IncrementValue"].Value = rowIncrement;
//calculate remaining rows
int remainingRows = rowCount - rowIncrement;
//MessageBox.Show($"RowCount: {rowCount}\nRowIncrement: {rowIncrement}\nRowChunkValue: {rowChunkValue}\nRemainingRows: {remainingRows}");
if (remainingRows <= rowChunkValue)
{
//short circuit
Dts.Variables["User::vRC_BatchValue"].Value = remainingRows;
//for loop assign expression is [@vRC_Increment = @vRC_Increment + @vRC_ChunkValue], let's reverse this for last loop iteration
Dts.Variables["User::vRC_Increment"].Value = rowIncrement - rowChunkValue;
}
Recently I deployed the package to production and started getting this error:
System.Data.SqlClient.SqlException (0x80131904): Execution Timeout Expired. The timeout period elapsed prior to completion of the operation or the server is not responding. ---> System.ComponentModel.Win32Exception (0x80004005): The wait operation timed out at Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost.HandleUserException(Exception e) at Microsoft.SqlServer.Dts.Pipeline.ScriptComponentHost.PostExecute() at Microsoft.SqlServer.Dts.Pipeline.ManagedComponentHost.HostPostExecute(IDTSManagedComponentWrapper100 wrapper) ClientConnectionId:b35cae06-1c27-4b61-9a32-6bf173753803 Error Number:-2,State:0,Class:11
After doing some research, I found the cause:
The SQL Server remote query timeout defaults to 600 seconds, a 10-minute wait. The server that runs the package will wait this long before terminating remote sessions.
Let's increase this. I am setting it to 7200 seconds (120 minutes, i.e. 2 hours), on both the client and the server.
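On the server side, one way to change this setting is with sp_configure (this assumes you have ALTER SETTINGS permission on the instance):

```sql
-- raise the remote query timeout from the default 600 seconds to 7200 (2 hours)
EXEC sp_configure 'remote query timeout', 7200;
RECONFIGURE;
```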
This might take some time; on my server it took 4 minutes.
Now run the SSIS package without re-deployment. Will it work? If not, you also need to set the timeout in the destination task:
For the Script Component, we can use something like this:
SqlCommand cmd = new SqlCommand(vSqlStatement, sqlConn);
/*
7200 sec = 120 min = 2 hours. This can be set to 0 for no timeout at all.
Will this work? It also depends on the server's timeout settings. In most SQL Server installs,
the default timeout for remote queries is 600 seconds (10 minutes).
*/
cmd.CommandTimeout = 7200;
sqlReader = cmd.ExecuteReader();
SSIS moves data using buffers. The default buffer size is 10 megabytes, with a maximum buffer size of 2^31-1 bytes. The default maximum number of rows per buffer is 10,000.
If you look at the destination buffer size, it is set to 0, which is the default. If the server does not have the horsepower, we can change this value to reduce the buffer size, say from 0 to 500.
Another property that helps adjust buffer size is AutoAdjustBufferSize. Set the AutoAdjustBufferSize property to True on the Data Flow Task; this helps improve performance.
OFFSET-FETCH is a feature added to the ORDER BY clause in SQL Server 2012. It can be used to extract a specific number of rows starting from a specific index.
Here is a simple example:
SELECT * FROM [dbo].[tblSample]
order by SampleId
OFFSET 0 ROWS
FETCH NEXT 10 ROWS ONLY
This instructs SQL Server to start from the first record and return only 10 records.
Let's write dynamic SQL to get the data in batches. For example, we have 7 records in the database and we would like to get 3 records on each round trip:
DECLARE @RowCount INT, @RowIncrement INT, @RowsInChunk INT, @RemainingRows INT, @SourceQuery NVARCHAR(255)
DECLARE @RowIncrementValue INT, @BatchValue INT
DECLARE @SQL_Select NVARCHAR(4000);
DECLARE @RowIncrementCheck INT;
SET @RowCount = 7;
SET @RowIncrement = 0;
SET @RowsInChunk = 3;
--initial value
SET @RowIncrementValue = @RowIncrement
SET @BatchValue = @RowsInChunk
--loop through all records
WHILE (@RowIncrement <= @RowCount)
BEGIN
--prepare SQL statement
SET @SQL_Select =
'SELECT *
FROM [dbo].[tblSample]
order by SampleId
OFFSET ' + CAST(@RowIncrementValue AS NVARCHAR) + ' ROWS ' + CHAR(13) +
'FETCH NEXT ' + CAST(@BatchValue AS NVARCHAR) + ' ROWS ONLY'
--EXECUTE sp_executesql @SQL_Select
PRINT @SQL_SELECT
SET @RowIncrement = @RowIncrement + @RowsInChunk
SET @RowIncrementValue = @RowIncrement
--calculate remaining rows
SET @RemainingRows = @RowCount - @RowIncrement
PRINT 'Remaining Rows ' + CAST(@RemainingRows AS NVARCHAR)
IF (@RemainingRows <= @RowsInChunk) AND (@RowIncrement <= @RowCount)
BEGIN
--short circuit this loop
SET @BatchValue = @RemainingRows
SET @RowIncrement = @RowCount
END
END
Here is the result: three dynamic SQL statements for 7 records:
SELECT *
FROM [dbo].[tblSample]
order by SampleId
OFFSET 0 ROWS
FETCH NEXT 3 ROWS ONLY
SELECT *
FROM [dbo].[tblSample]
order by SampleId
OFFSET 3 ROWS
FETCH NEXT 3 ROWS ONLY
SELECT *
FROM [dbo].[tblSample]
order by SampleId
OFFSET 6 ROWS
FETCH NEXT 1 ROWS ONLY
In the first query above, OFFSET 0 skips zero rows and FETCH NEXT 3 ROWS ONLY extracts only 3 rows.
We want to treat lambda expressions as expression trees and look inside them instead of executing them. For example, LINQ to SQL takes the expression, converts it to an equivalent SQL statement, and submits it to the server (rather than executing the lambda).
With lambda expressions we can write less code, and the code will be more readable.
A lambda expression uses the lambda operator =>, which means "goes to".
arguments => expression
The expression reads: arguments go to expression.
Let’s start with a common example;
using System;
namespace Lambdas
{
class Program
{
static void Main()
{
Console.WriteLine(Square(5));
Console.ReadLine();
}
static int Square(int num)
{
return num * num;
}
}
}
We declare a method called Square that returns an int and takes one input parameter of type int. It multiplies the number by itself and returns the result.
In the Main() method we call our Square method with an int as a parameter.
We can convert this function into a lambda expression. This is what it would look like:
num => num * num;
This expression reads: num goes to the calculation (num * num).
The expression is short and easily readable once we get used to it. The catch, though, is that a lambda expression must be assigned to a delegate.
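For example, the lambda can be assigned to the built-in Func<int, int> delegate type, giving us a working replacement for the Square method (a minimal sketch; the variable name square is our own choice):

```csharp
using System;

namespace Lambdas
{
    class Program
    {
        static void Main()
        {
            // the lambda is assigned to a Func<int, int> delegate:
            // one int input parameter, int return value
            Func<int, int> square = num => num * num;

            Console.WriteLine(square(5)); // prints 25
            Console.ReadLine();
        }
    }
}
```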