SQL Server Performance

Advice for a huge data transfer with SQL

Discussion in 'SQL Server 2008 T-SQL Performance Tuning' started by RedAngel, May 11, 2012.

  1. RedAngel New Member

    Hi, I'm a new subscriber and I have a question I hope you can help me solve.
    I also think someone might find the tips I use here useful for doing the same job, so I hope this helps.
    Here it is:

    I have to back up data from several tables into a different database, created on the fly on the same server, selecting only a subset of rows from the original tables (I have to move historical data from the source database to a new one in order to archive it and free space).

    I'm developing the solution in C# (that doesn't really matter; I only mention it for context) and I send T-SQL queries to do the job, building one large T-SQL batch inside a TRY...CATCH block and opening a single TRANSACTION while I back up all the tables, because it is important to keep consistency (I delete the copied records from the source tables after the data has been copied).

    Database creation isn't an issue; that part is already solved.

    Table creation isn't an issue either: I create each table using SELECT ... INTO with WHERE 1=0, just to get the basic schema without indexes and constraints (one problem is that this keeps the identity property, but I solved that with SET IDENTITY_INSERT ON before the INSERT of the real data).
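
    For reference, a minimal sketch of that schema-only copy (database and table names are placeholders):

    -- Copies the column definitions (including the identity property)
    -- but no rows, no indexes, and no constraints; WHERE 1=0 keeps it empty
    SELECT *
    INTO [DestinationDB].[dbo].[TableName]
    FROM [SourceDB].[dbo].[TableName]
    WHERE 1 = 0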

    So where is the problem?
    The main problem is that the INSERT INTO statement I use to copy the data takes too long (30+ minutes to copy 5.5 million records), and I'm wondering if I can do something to make the job much faster.

    The query I build is something like this (I've excluded the database and table creation).

    I start by declaring variables and opening a transaction:

    SET XACT_ABORT ON
    BEGIN TRY
    DECLARE @starttrancount int
    -- remember the caller's transaction depth, so we only
    -- commit or roll back a transaction we opened ourselves
    SELECT @starttrancount = @@TRANCOUNT
    DECLARE @ColName varchar(100)
    DECLARE @cursorColNames CURSOR
    DECLARE @columns varchar(2000)
    IF @starttrancount = 0 BEGIN TRANSACTION

    Then, for each table I manage, I do something like this:
    first I obtain the column names of the table and store them in a varchar variable (@columns) for later use
    (I do this because IDENTITY_INSERT requires an explicit column list in the INSERT INTO statement);
    then I use EXEC to run a dynamic INSERT INTO, using @columns to specify the fields
    (I don't know if there is a better way, so this is my first question).
    I use the NOLOCK hint on the source table to speed up data fetching.

    Here I show the script for just a single table; imagine it repeated for 6 or 7 tables (the WHERE clause is an example; just consider that each table has a datetime field I query on, and that field is indexed on the source table).

    -- collect the table's column names into a comma-separated list
    SET @cursorColNames = CURSOR FOR
    SELECT COLUMN_NAME FROM [SourceDB].INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = 'TableName'
    SET @columns = ''
    OPEN @cursorColNames
    FETCH NEXT FROM @cursorColNames INTO @ColName
    WHILE @@FETCH_STATUS = 0
    BEGIN
    SELECT @columns = @columns + '[' + @ColName + '],'
    FETCH NEXT FROM @cursorColNames INTO @ColName
    END
    CLOSE @cursorColNames
    DEALLOCATE @cursorColNames
    -- strip the trailing comma
    SET @columns = (CASE WHEN @columns = '' THEN '' ELSE LEFT(@columns, LEN(@columns) - 1) END)
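
    As an aside, the same list could be built without the cursor, using the common variable-concatenation pattern (just a sketch, reusing the same @columns variable; note that the row-by-row behavior of this pattern is not formally documented, although it is widely used):

    SET @columns = ''
    -- append each column name to the running string
    SELECT @columns = @columns + '[' + COLUMN_NAME + '],'
    FROM [SourceDB].INFORMATION_SCHEMA.COLUMNS
    WHERE TABLE_NAME = 'TableName'
    -- strip the trailing comma
    SET @columns = (CASE WHEN @columns = '' THEN '' ELSE LEFT(@columns, LEN(@columns) - 1) END)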

    EXEC('-- IDENTITY_INSERT is switched ON and OFF inside the same dynamic batch
    -- as the INSERT; only one table per session can have it ON at a time
    SET IDENTITY_INSERT [DestinationDB].[dbo].[TableName] ON
    INSERT INTO [DestinationDB].[dbo].[TableName] (' + @columns + ')
    SELECT ' + @columns + '
    FROM [SourceDB].[dbo].[TableName] WITH (NOLOCK)
    WHERE Date BETWEEN ''2012-01-01T00:00:00'' AND ''2012-01-01T23:59:59''
    SET IDENTITY_INSERT [DestinationDB].[dbo].[TableName] OFF
    ')

    Then I commit the transaction, or handle the error and roll back in the CATCH block:

    IF @starttrancount = 0 COMMIT TRANSACTION
    END TRY
    BEGIN CATCH
    DECLARE @ErrorMessage NVARCHAR(4000);
    DECLARE @ErrorSeverity INT;
    DECLARE @ErrorState INT;

    SELECT
    @ErrorMessage = ERROR_MESSAGE(),
    @ErrorSeverity = ERROR_SEVERITY(),
    @ErrorState = ERROR_STATE();

    IF XACT_STATE() <> 0 AND @starttrancount = 0
    ROLLBACK TRANSACTION
    RAISERROR (@ErrorMessage, -- Message text.
    @ErrorSeverity, -- Severity.
    @ErrorState -- State.
    );
    END CATCH

    I omitted the DELETE statements that run after each INSERT INTO; a sketch of one follows below.
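
    Just to give the idea, the omitted DELETE for the example table would be something along these lines (a sketch only; the real statements mirror each table's WHERE clause so that only the copied rows are removed):

    -- remove the rows that were just copied, using the same window
    DELETE FROM [SourceDB].[dbo].[TableName]
    WHERE Date BETWEEN '2012-01-01T00:00:00' AND '2012-01-01T23:59:59'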
    I'd like to point out that I want to transfer the data of all tables before committing the transaction. I can narrow the datetime range: I started with a whole day at a time, now I'm using an hour at a time, and I could narrow it further to keep the transaction smaller and, I suppose, less resource-expensive.
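
    For example, a sketch of driving the window with variables rather than hard-coded literals (the variable names are hypothetical; the half-open >= / < comparison also catches rows falling between 23:59:59 and the next midnight):

    DECLARE @from datetime = '2012-01-01T00:00:00'   -- window start (example)
    DECLARE @to   datetime = DATEADD(HOUR, 1, @from) -- one-hour window
    -- EXEC('...') only concatenates string literals and variables,
    -- so format the boundaries as ISO 8601 strings first
    DECLARE @fromStr varchar(30) = CONVERT(varchar(30), @from, 126)
    DECLARE @toStr   varchar(30) = CONVERT(varchar(30), @to, 126)

    EXEC('SET IDENTITY_INSERT [DestinationDB].[dbo].[TableName] ON
    INSERT INTO [DestinationDB].[dbo].[TableName] (' + @columns + ')
    SELECT ' + @columns + '
    FROM [SourceDB].[dbo].[TableName] WITH (NOLOCK)
    WHERE Date >= ''' + @fromStr + ''' AND Date < ''' + @toStr + '''
    SET IDENTITY_INSERT [DestinationDB].[dbo].[TableName] OFF')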

    That's what I do. Can someone point me to a smarter solution that would speed up the job?
    I'd also like general advice about what I'm doing here. Thanks!
    I hope this is an exhaustive post :)
    Fabio

    P.S.
    I edited the post because of a formatting issue!
