Importing huge size of data always failed | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive


Hi Friends,

I'm working on a .NET project; one of its forms works as an import/export utility.

1. Export: it exports all tables to an XML file (image fields are saved to disk) and then deletes all data. All of these operations are done as one transaction.
2. Import: it loads the exported data back into SQL Server, again as one transaction.

This works fine with small amounts of data, but it always fails with large amounts.

For example:

- I'm trying to import 2.34 GB of data into SQL Server 2000.
- The timeout in SQL Server is set to 600 seconds (the default value), and the size of one record does not exceed 130 KB.
- I have 16 GB of free space on the drive.
- The import operation starts, and data is transferred.
- Free space on the drive decreases while the data is being imported.
- About 10 minutes later, the import operation fails. The error message returned to my application is "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding."
- The drive now has 10.8 GB of free space.
- Everything is rolled back. The rollback takes a long time for such a transaction.

My questions are:

- Is this because we run one transaction over such a huge amount of data?
- Why does this problem happen?
- How can I avoid it, so that I can import a large amount of data without problems?

Thanks for your time and help... I'm waiting for your help, friends. :(
What recovery model are you using? If it's set to full, all bulk inserts are fully logged. If you're happy to reapply all bulk inserts since the last backup in the event of a database failure, then you can switch to bulk-logged mode.
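A minimal sketch of switching the recovery model around the import (the database name and backup path are illustrative, not from the original post; the syntax is valid on SQL Server 2000):

```sql
-- Switch to bulk-logged for the duration of the import
-- (illustrative database name; substitute your own).
ALTER DATABASE MyAppDB SET RECOVERY BULK_LOGGED;

-- ... run the bulk import here ...

-- Switch back, then take a log backup so the minimally
-- logged work since the last backup is captured.
ALTER DATABASE MyAppDB SET RECOVERY FULL;
BACKUP LOG MyAppDB TO DISK = 'D:\Backups\MyAppDB_log.bak';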
Possibly even simple mode, if you don't use transaction log backups and back up regularly. Assuming the database is set to autogrow, is it set to grow by a megabyte amount or by a percentage? I've seen timeouts where a large database was still set to grow by 10%, and there was a timeout while that space was allocated. Try setting it to a fixed megabyte amount instead. Avoid using XML unless absolutely necessary; it is optimised for representation, not for processing. Finally, try adding a batch size to your bulk insert.
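A sketch of both suggestions, assuming a logical data file named MyAppDB_Data and an illustrative table and file path (none of these names are from the original post):

```sql
-- Grow by a fixed 100 MB instead of 10% of a multi-GB file,
-- so each growth event is quick and predictable.
ALTER DATABASE MyAppDB
MODIFY FILE (NAME = MyAppDB_Data, FILEGROWTH = 100MB);

-- Commit every 10,000 rows so a failure doesn't roll back the
-- entire 2.34 GB, and the log can be truncated between batches.
BULK INSERT dbo.ImportTable
FROM 'D:\Import\importtable.dat'
WITH (BATCHSIZE = 10000, TABLOCK);
```

BATCHSIZE makes each batch its own transaction, which is the direct counter to the single giant transaction described in the question.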
Also check TEMPDB contention in this regard.

Satya SKJ
Contributing Editor & Forums Moderator
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
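For the TEMPDB check suggested above, one quick way to see how much room tempdb has (valid on SQL Server 2000):

```sql
USE tempdb;
EXEC sp_spaceused;  -- reports database size and unallocated space
```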