SQL Server Performance Forum – Threads Archive
BCP Utility
Hi! I have the following problem: I am transferring big tables (about 100 – 400 MB) from a central SQL Server to several clients. Since there is a 2 Mbit line between server and client, I use compressed files (bcp native mode) to accomplish this in an acceptable time.
When I'm inserting the data into my tables, I'd like to verify the transfer with a stored procedure. The procedures work if I start them manually, but when I start them automatically after the bcp utility has finished, the procedure returns NULL.
I think my problem is that the server needs some time to rebuild the indexes on these tables.
So I'm looking for a way to monitor the SQL Server (which is quite busy after the bcp in) to tell when it's finished, so that my check SP can validate whether the transfer was successful. Anyone any idea? Greetings
You could use a query that checks the rowcount of the tables on the source server and compares it to the rowcount of the tables on the destination servers (using linked servers). If the rowcounts match, the process has finished. That may be too simplistic, but it may be what you're after. Tom Pullen
DBA, Oxfam GB
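A minimal T-SQL sketch of the rowcount comparison Tom describes; `SRCSVR` (the linked server), `MyDb` and `dbo.BigTable` are assumed names you would replace with your own:

```sql
-- Compare the source rowcount (via a linked server) with the local one.
DECLARE @src INT, @dst INT

SELECT @src = cnt
FROM OPENQUERY(SRCSVR, 'SELECT COUNT(*) AS cnt FROM MyDb.dbo.BigTable')

SELECT @dst = COUNT(*) FROM dbo.BigTable

IF @src = @dst
    PRINT 'Rowcounts match - load appears complete'
ELSE
    PRINT 'Rowcounts differ - still loading or rows missing'
```

You could run this from a job or your check SP in a polling loop, waiting until the counts match before validating the transfer.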
Use PROFILER to monitor the activity during this process. Satya SKJ
Hi! Thank you for your hints!
-> The rowcount method: I tried this, but I didn't get any results because the SQL Server is quite busy when it inserts the data into the destination table (the rowcount query times out). I think the index rebuild after the insertion of the data is responsible for this. I also tried to monitor the sysprocesses table to get a handle on the SQL Server processes; that didn't work either (timeout in the interesting timeframe).
-> As for the Profiler: this transfer should take place fully automatically, user triggered. So I will try this to gain new information, but I'm looking for a way to automate it. Is there a way to do this? (My clients are MSDEs with a VB client tool.)
I've read that it might be useful to drop the indexes on my tables before I insert the data, but I'm a little afraid to do that, because if I insert inconsistent data there might be duplicate records, which would prevent the index rebuild, which is not good (this shouldn't happen, but you never know…). Another way might be to limit bcp's batch size to 1000 rows. But since a transaction ends every 1000 rows, the indexes will also be updated each time. Since our PKs are very complex, this needs some time, so I have major performance concerns about using this. Thanks again for your ideas; I'm looking forward to continuing this discussion! Greetings
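If you do decide to drop the indexes before the load, you can check for duplicate keys before recreating them, so a bad load never blocks the rebuild. A hedged sketch, assuming a table `dbo.BigTable` with a key column `KeyCol` (replace with your actual PK columns):

```sql
-- Any rows returned here are duplicate keys that would make
-- recreating the unique index / primary key fail.
SELECT KeyCol, COUNT(*) AS cnt
FROM dbo.BigTable
GROUP BY KeyCol
HAVING COUNT(*) > 1
```

If this returns no rows, it is safe to recreate the constraint; otherwise you can delete or repair the duplicates first.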
If you have a Primary/Unique key constraint on the table, then no duplicate rows will be allowed.
The best is to reindex after the load so that the statistics on the table are updated. Anyway, BCP is a minimally logged operation, and a batch size of 1000 is not a problem to execute. HTH Satya SKJ
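A sketch of the post-load reindex Satya suggests, using SQL Server 2000 era commands; `dbo.BigTable` is an assumed table name:

```sql
-- Rebuild all indexes on the table after the bulk load,
-- then refresh its statistics.
DBCC DBREINDEX ('dbo.BigTable')
UPDATE STATISTICS dbo.BigTable
```

Rebuilding after the load (rather than maintaining indexes during it) is usually faster for large bulk inserts and leaves the optimizer with current statistics.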