insert hangs ??? | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

insert hangs ???

Hi there, I am having strange problems with the following statement.
—————————————————————————————-
insert into MaterialMaster
(SKU, ShortDescription, LongDescription, UnitsInPallet, BaseUOM, EAN, PreferredSupplier)
select SKUCode, left(Description, 20), Description, UnitsInPalette, BaseUnitOfMeasure, Barcode, PreferredSupplierNumber
from tImportSKU
—————————————————————————————-
As you can see, all it does is insert records from tImportSKU into MaterialMaster. There are only 26,000 rows to insert.
When I execute this statement, the processor hits 100% and the box comes to a standstill indefinitely until I kill the process. So what I did was drop the constraints and indexes on MaterialMaster, as I have seen issues like this with corrupt indexes; the table has a primary key constraint and a nonclustered index. This is where it gets really weird: without the constraints and index, the above statement takes a few seconds with almost no CPU utilization. Everything is OK.

So I thought corrupt index or something, and re-created the constraint and index. I even tried dropping the table, recreating it and the index, and trying the statement again, but every time the CPU hits 100% and stays there. I have dropped the nonclustered index leaving the primary key: same problem. I have dropped the primary key and left the nonclustered index: same problem. ONLY WHEN I DROP BOTH THE PRIMARY KEY AND THE NONCLUSTERED INDEX does it work. I have done a CHECKDB and a CHECKTABLE on the necessary objects; no errors. Please help, I am stumped. Thanx
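For reference, the integrity checks described above would look roughly like this (the database name is an assumption, not from the thread):

-- Assumed database name; check the database and the two tables involved
DBCC CHECKDB ('ImportDB') WITH NO_INFOMSGS
DBCC CHECKTABLE ('MaterialMaster') WITH NO_INFOMSGS
DBCC CHECKTABLE ('tImportSKU') WITH NO_INFOMSGS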
Check whether the datatypes and lengths of the corresponding columns in the two tables are the same?
Hi Vishu, yes: the primary key is on SKU (populated from SKUCode), both varchar(40); the nonclustered index is on EAN (populated from Barcode), both varchar(20) on both tables. Thanx
Perform the insert statement using BEGIN … END statements (an explicit transaction), or create a temp table and use BCP to insert into the target table. Take a look at the transactions topic under Books Online. Satya SKJ
Moderator
http://www.SQL-Server-Performance.Com/forum
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
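A minimal sketch of the explicit-transaction variant suggested above (no error handling; whether this changes anything is exactly what the suggestion is meant to test):

-- Wrap the problem insert in an explicit transaction
BEGIN TRANSACTION
INSERT INTO MaterialMaster
(SKU, ShortDescription, LongDescription, UnitsInPallet, BaseUOM, EAN, PreferredSupplier)
SELECT SKUCode, LEFT(Description, 20), Description, UnitsInPalette, BaseUnitOfMeasure, Barcode, PreferredSupplierNumber
FROM tImportSKU
COMMIT TRANSACTION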
Perhaps you have a PK (+ clustered index) that is "too wide" – a number of varchar columns combined, which can cause horrible response times. Does the target table have an identity column? If not, add one. In both cases, proceed as follows (a sketch in T-SQL follows below):
* Drop the existing PK.
* Add the PK (clustered index) on the identity column.
* Add a unique index on the old PK columns to ensure data integrity.
If there is an FK relationship referring to your old PK, you may have to drop the FK and replace it with triggers, or redefine the child tables to refer to the identity column instead. Obviously this would require a serious interruption of work on a production database, and it may have significant ramifications for client applications accessing the database.
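A sketch of those steps in T-SQL; the constraint and index names here are hypothetical, and the real name of the existing PK would have to be looked up first:

-- 1. Add the identity column
ALTER TABLE MaterialMaster ADD RowID int IDENTITY(1,1) NOT NULL
-- 2. Drop the existing PK (constraint name assumed)
ALTER TABLE MaterialMaster DROP CONSTRAINT PK_MaterialMaster
-- 3. Add the PK (clustered index) on the identity column
ALTER TABLE MaterialMaster ADD CONSTRAINT PK_MaterialMaster_RowID PRIMARY KEY CLUSTERED (RowID)
-- 4. Add a unique index on the old PK column to keep data integrity
CREATE UNIQUE NONCLUSTERED INDEX UQ_MaterialMaster_SKU ON MaterialMaster (SKU)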
Hi Satya, not sure how it would make a difference, but I tried it and got the same problem. Thanx
Do you already have data in the MaterialMaster table before the insert mentioned?
Hi Vishu, no, MaterialMaster is empty. Adriaan, thanx for the feedback, but this should just work; there should be no need for all the modifications you mentioned. The primary key and nonclustered index on this table are each only one column, SKU and EAN respectively, both varchar, with lengths and definitions the same as in the tImportSKU table. This query works on the production server: identical query, both tables identical including indexes. I also checked the data; no obvious problems. The real question is why an insert that works without indexes would suddenly hang the server when a primary key or nonclustered index (so basically any index) is added on a column in the table. No error messages, just CPU hits 100% and stays there, and like I said, no errors from CHECKDB and CHECKTABLE on the objects involved. Very weird? Thanx
What’s the fill factor on the PK index? If not set appropriately, then inserting/updating may take excessive amounts of time because of the index re-organization required as you go along.
What is the data type of the description column in the source table? About indexes: having indexes on long varchar columns, especially a clustered index on such a column, causes heavy page splits. That's still not a reason to hang, but it causes bad performance.
Adriaan, I guess you missed the info that the destination table is empty. I asked about this exactly for the reason you mentioned.
Silly question, perhaps, but what happens if you don’t include the short description? So drop the LEFT(Description, 20) expression and the target column. Less silly: what happens if you add WITH (NOLOCK) to the FROM part of the insert query?
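For reference, the NOLOCK variant would be the same statement with the table hint added:

INSERT INTO MaterialMaster
(SKU, ShortDescription, LongDescription, UnitsInPallet, BaseUOM, EAN, PreferredSupplier)
SELECT SKUCode, LEFT(Description, 20), Description, UnitsInPalette, BaseUnitOfMeasure, Barcode, PreferredSupplierNumber
FROM tImportSKU WITH (NOLOCK)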
Thanx for all the feedback, everyone. I am still dumbfounded why it works on the production server, but perhaps it has something to do with what you all mentioned.
Unfortunately I will only be able to try out the suggestions a bit later; will reply when I have feedback. Thanx
Are the indexes created on the same filegroup? This could potentially be a disk-related problem.
Does the insert still produce 100% CPU if you only try to insert one row?
insert into MaterialMaster
(SKU, ShortDescription, LongDescription, UnitsInPallet, BaseUOM, EAN, PreferredSupplier)
select TOP 1 SKUCode, left(Description, 20), Description, UnitsInPalette, BaseUnitOfMeasure, Barcode, PreferredSupplierNumber
from tImportSKU
Hi Everyone. OK, first, Adriaan: the fillfactor and pad_index are set to the server default, which is 100%. Still, I do not think this is the problem: we are talking about 20,000 small records, and even if it were true, we would be talking about it taking a minute instead of a few seconds, while this query literally kills the box indefinitely.

Mmarovic, the description column in the source is varchar(100); the short and long description columns in the destination are varchar(20) and varchar(100) respectively.

DStevensTN, yes, there is only one filegroup, which is obviously the primary filegroup. I also suspected this, but Performance Monitor shows no disk queue length or out-of-the-ordinary disk behaviour, although I was only tracing for queue lengths.

Chappy, your comment proved most interesting: yes, 1 row works. Through trial-and-error testing I found that I can insert up to the top 500 rows; as soon as I try to insert the top 501, the problem occurs. So I looked at the data: as the clustered index is SKU, I know the order of the result set that will be inserted, and row 501 in the tImportSKU table has no strange data whatsoever, nothing suspicious. Then I looked at filegroup space etc., and nothing: files are set to auto-grow and there is a lot of space. So why can I insert the top 500 rows but not the top 501??? At least we are onto something here.
Thanx
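A hypothetical way to eyeball the rows around the failure point in clustered-index order (SQL Server 2000 era, so TOP with ORDER BY; the columns selected here are just examples):

-- Inspect the first 501 rows in SKU order; row 501 is the suspect row
-- Column list is illustrative, not from the thread
SELECT TOP 501 SKUCode, Barcode, LEN(Description) AS DescLen
FROM tImportSKU
ORDER BY SKUCode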

Change to a more insertion-friendly fill factor, like 80 to 90%, and see what happens. Also make sure that the non-clustered indexes have the same fill factor. If you can't mess with the indexes, then use the following routine to work in batches of 500 rows:

SET ROWCOUNT 500
WHILE EXISTS
  (SELECT * FROM Source WHERE NOT EXISTS
    (SELECT * FROM Target WHERE Target.PK = Source.PK))
BEGIN
  INSERT INTO Target
  SELECT * FROM Source WHERE NOT EXISTS
    (SELECT * FROM Target WHERE Target.PK = Source.PK)
END
SET ROWCOUNT 0
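For the fill-factor change suggested above, a minimal sketch using SQL Server 2000-era syntax (the index name is assumed; for a PK it matches the constraint name):

-- Rebuild the PK index with a 90% fill factor (index name assumed)
DBCC DBREINDEX ('MaterialMaster', 'PK_MaterialMaster', 90)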
quote:OK, first, Adriaan: the fillfactor and pad_index are set to the server default, which is 100%. Still, I do not think this is the problem: we are talking about 20,000 small records, and even if it were true, we would be talking about it taking a minute instead of a few seconds, while this query literally kills the box indefinitely.
As long as your destination table is empty before the insert, fill-factor settings don't affect the performance. They matter when you create/rebuild an index on a table with a significant enough number of rows.
quote:Chappy, your comment proved most interesting: yes, 1 row works. Through trial-and-error testing I found that I can insert up to the top 500 rows; as soon as I try to insert the top 501, the problem occurs. So I looked at the data: as the clustered index is SKU, I know the order of the result set that will be inserted, and row 501 in the tImportSKU table has no strange data whatsoever, nothing suspicious. Then I looked at filegroup space etc., and nothing: files are set to auto-grow and there is a lot of space.
Well, then a batched insert may help. Besides what Adriaan suggested there are more efficient approaches, but that one looks good enough for only 20,000 rows.
quote:So why can I insert the top 500 rows but not the top 501???
I have no idea.

quote:So why can I insert the top 500 rows but not the top 501???
Another factor to consider when things slow down is when the database and log files need to grow. Possibly your 500-row insert stays just inside the currently allocated file size, and a 501-row insert would push past it; and/or your file growth setting is too large (10% on a 50 MB file isn't too much, but wait until the file starts going into GB territory).
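A sketch of switching from percentage growth to a fixed increment (the database and logical file names here are made up):

-- Assumed database and logical file name
ALTER DATABASE ImportDB
MODIFY FILE (NAME = ImportDB_Data, FILEGROWTH = 100MB)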
Another suggestion :)

So far we are all assuming it is the insert which is hanging.

select TOP 501 SKUCode, left(Description, 20), Description, UnitsInPalette, BaseUnitOfMeasure, Barcode, PreferredSupplierNumber
from tImportSKU

What does this do on its own?
Hi Everyone. OK Chappy, the insert is definitely the problem; I can select all the rows from the source (tImportSKU).
Adriaan, space is definitely not the problem: I can insert all 26,000 rows without the indexes, and the 26,000 rows will take up more space.
I doubt that the fillfactor has anything to do with it either. I have tried your code, Adriaan, but it does not work: because ROWCOUNT is set to 500, only the first 500 rows are returned for the WHERE NOT EXISTS (SELECT * FROM Target WHERE Target.PK = Source.PK) subquery of the insert. Therefore it inserts the first 500 rows, the test is always true, and it goes into an infinite loop, so I am still getting nowhere. Even if we can get it to insert 500 rows in batches, we still don't know what the problem is. And I cannot change the code that does the insert to work with batches. I think we are getting to a dead end, but thanx a million guys.
quote:I have tried your code, Adriaan, but it does not work: because ROWCOUNT is set to 500, only the first 500 rows are returned for the WHERE NOT EXISTS (SELECT * FROM Target WHERE Target.PK = Source.PK) subquery of the insert. Therefore it inserts the first 500 rows, the test is always true, and it goes into an infinite loop, so I am still getting nowhere.
Ok, correct code with the same approach would be something like:

WHILE EXISTS
  (SELECT * FROM Source WHERE NOT EXISTS
    (SELECT * FROM Target WHERE Target.PK = Source.PK))
BEGIN
  INSERT INTO Target
  SELECT TOP 500 * FROM Source WHERE NOT EXISTS
    (SELECT * FROM Target WHERE Target.PK = Source.PK)
END
Anyway, I still have no idea how to explain why the insert hangs, but I think an index redesign may still help.