Performance Issue with Linked Server | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

Performance Issue with Linked Server

Hi, I have created a linked server for an MS Access table. I now need to copy the records from the linked server table to my SQL database table. The number of records is 100,000. If I use a simple INSERT INTO ... SELECT * FROM the linked server, the query executes for almost 10 minutes. Is there a way to speed this up?
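For clarity, the statement being run is along these lines (the linked server, table, and column names below are placeholders, not the actual ones):

INSERT INTO dbo.MyPermanentTable (MyID, MyInfo)
SELECT MyID, MyInfo
FROM ACCESS_LINKED...MyAccessTable -- four-part name; catalog and schema are omitted for an Access linked server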
Have you considered using DTS or BCP to move the data? Both options, if set up correctly, will most likely provide faster performance. Either of these options can be automated to occur on a scheduled basis.
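As one rough sketch of that idea: if the Access data can first be exported to a flat file, something like BULK INSERT (the T-SQL counterpart of bcp) could load it. The file path, terminators, and table name below are placeholders, and this assumes a comma-delimited export exists:

BULK INSERT dbo.MyPermanentTable
FROM 'C:\imports\access_export.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')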
——————
Brad M. McGehee
Webmaster
SQL-Server-Performance.Com
If you have 100,000 records in an Access database, do not expect performance to be great. You *might* be better off writing a VB app that loads the data and then inserts it into the database, and even that's a maybe.
"How do you expect to beat me when I am forever?"
Hi, thanks for your replies. My problem is that I need to execute the copy from my application (Java Servlets) with different MS Access files (and other file formats) every time. For this reason, I have used linked servers. My application simply creates a linked server for any file specified by the user and copies the content into a permanent SQL table, from which I can do the required processing. I could not find a way to use DTS that could be dynamically built based on different file formats, different file structures, etc. Thanks
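Roughly, the kind of call involved looks like this (the linked server name and file path are placeholders; the Jet provider shown is what is typically used for .mdb files):

-- Create a linked server pointing at an Access file
EXEC sp_addlinkedserver
    @server = 'ACCESS_LINKED',
    @srvproduct = 'Access',
    @provider = 'Microsoft.Jet.OLEDB.4.0',
    @datasrc = 'C:\imports\userfile.mdb'

-- Access databases normally use the Admin account with a blank password
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = 'ACCESS_LINKED',
    @useself = 'false',
    @locallogin = NULL,
    @rmtuser = 'Admin',
    @rmtpassword = NULL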
Ankit
Instead of inserting everything at once, how about batch inserting 10,000 records at a time? I have used this technique with fairly good success. A rough sketch is below.
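This sketch assumes the source table has a numeric ID column you can range over; all server, table, and column names are placeholders:

DECLARE @BatchStart INT, @BatchSize INT
SET @BatchStart = 1
SET @BatchSize = 10000

-- 100,000 rows total in this example; adjust the upper bound to your row count
WHILE @BatchStart <= 100000
BEGIN
    INSERT INTO dbo.MyPermanentTable (MyID, MyInfo)
    SELECT MyID, MyInfo
    FROM ACCESS_LINKED...MyAccessTable
    WHERE MyID BETWEEN @BatchStart AND @BatchStart + @BatchSize - 1

    SET @BatchStart = @BatchStart + @BatchSize
END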
"How do you expect to beat me when I am forever?"
Is it possible to move the data from Access to SQL Server permanently? That way, you won't have a performance problem. Using Access always promotes performance problems. We have outlawed the use of Access for multiple-user programs at our company for that reason. But if you are stuck with Access, another thing to think about is indexing. When you import the data from Access into SQL Server, does the SQL Server table have an index on it? If so, test this: before doing the INSERT, drop the index, do the INSERT, and then re-add the index. In some cases, this can be faster than importing directly into a table with a preexisting index. I don't know if this will help you in your case, but you won't know until you test.
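A rough sketch of that pattern, with placeholder index, table, and column names:

-- Drop the index before the load
DROP INDEX MyPermanentTable.IX_MyPermanentTable_MyID

-- Do the insert against the unindexed table
INSERT INTO dbo.MyPermanentTable (MyID, MyInfo)
SELECT MyID, MyInfo
FROM ACCESS_LINKED...MyAccessTable

-- Re-create the index afterwards
CREATE INDEX IX_MyPermanentTable_MyID ON dbo.MyPermanentTable (MyID)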
——————
Brad M. McGehee
Webmaster
SQL-Server-Performance.Com
Hi, unfortunately it's not possible to avoid MS Access files. The table into which we are inserting does not have any indexes. Regards
Ankit
Another observation: it takes almost the same time to complete the process even if I execute it from SQL Query Analyzer. The difference between executing it from Java code and from SQL Query Analyzer is insignificant. Is there a way to create custom DTS packages from Java code? I could then try DTS for loading the data. I guess that would be faster.
Have a look around at http://www.sqldts.com; you may get some information.
HTH Satya SKJ
If you have run this in Query Analyzer and it's still taking the same amount of time, then there are a couple of things you can do. First, this might simply come down to what your server can process, which means considering better hardware. Secondly, try my suggestion of batch inserting 10,000 records, or fewer, at a time and see if that works better for you.
"How do you expect to beat me when I am forever?"
Just a minor point, but how long does the SELECT * from Access take if it's run on its own over the linked server? I've had many problems with Access before; sometimes performance has been improved by doing a compact and repair on the Access database. (I believe the database never shrinks otherwise, so if you've had a lot of dynamic data which has since been deleted, your .mdb might be much bigger than necessary, possibly decreasing performance.)

I'll try the batch insert and see. The compacting of MS Access files is not really under my control, since the files are sent by external parties and my application has to 'read' these files.
If possible, tune them after you receive them at your end, even though it's a bit tedious. At least this will ensure better performance (just a thought). Satya SKJ

I just thought of another idea. When you INSERT the data, are you using an implicit or explicit transaction? If you are using an implicit transaction, the log has to be written to every time a single row is inserted, which can add time to the INSERT process. But if you include all of the INSERTs inside an explicit transaction, the logging will occur all at once (when the transaction is committed), reducing disk I/O and boosting INSERT performance. Here is some sample code to illustrate what I mean:
SET NOCOUNT ON
-- Insert xxx number of rows
DECLARE @Count INT
SET @Count = 1
BEGIN TRANSACTION
WHILE @Count <= xxx
BEGIN
    INSERT INTO t1 VALUES(@Count)
    SET @Count = @Count + 1
END
COMMIT
SET NOCOUNT OFF
This code INSERTs xxx number of new rows (of course your INSERT statement will be different). But the point I want to illustrate is the use of the explicit transaction. If you are not using an explicit transaction now, give it a try and let us know what happened.
——————
Brad M. McGehee
Webmaster
SQL-Server-Performance.Com
Sorry for the long delay. We are already using an explicit transaction.
Have you done a Profiler trace of this import process to see exactly what is happening? What I would like to know is whether there is only one transaction showing up, or very many, to perform the import. If the Profiler trace is not too long, perhaps you can post it here for our review.
——————
Brad M. McGehee
Webmaster
SQL-Server-Performance.Com
Yes, do a Profiler trace, and check for sp_cursoropen statements to see if there is one for every row. There most likely is. I have had the same issue a few times when doing the exact same transfer you are doing, but between two SQL Servers. The thing is that when you do:
INSERT INTO [myDB].[dbo].[myTable]([myID], [myInfo])
SELECT * FROM myLinkedServer.myOtherDB.dbo.myOtherTable
you will see in Profiler that you get an sp_cursorfetch for each and every row that is sent! Using a cursor and fetching the rows this way is extremely slow. I don't know why SQL Server does it like this. If you instead rewrite your query like this:
INSERT INTO [myDB].[dbo].[myTable]([myID], [myInfo])
SELECT * FROM OPENQUERY(myLinkedServer, 'SELECT * FROM myOtherDB.dbo.myOtherTable')
you will see that the data is sent over in a single batch and not as a slow cursor, row by row. This is much faster. By using OPENQUERY you issue a so-called "pass-through query" and the processing is different. Hopefully it's the same when dealing with Access and not only between two SQL Servers, and you can use the above query to improve performance. /Argyle
Great suggestion Argyle.
——————
Brad M. McGehee
Webmaster
SQL-Server-Performance.Com