SQL Server Performance

creating tables in a trigger - performance

Discussion in 'Performance Tuning for DBAs' started by abc, Jul 2, 2003.

  1. abc New Member

    Hello all!

    I have an AFTER INSERT trigger that creates tables:

    CREATE trigger tbl1_insert on dbo.tbl1 after insert as
    begin
    ...
    exec('create table ' + @tablename + '...')
    end

    If I do about 50,000 inserts (without a commit after every insert), it becomes much slower after roughly 40,000-50,000 inserts.
    Any idea?

    Thanks!

    (I know that creating tables on the fly is not a good idea, but... I need it :( )
  2. bambola New Member

    Could you explain why you need to create the tables? There could be another solution.

    Bambola.
  3. abc New Member

    It's for an application that builds a database schema for storing ontologies... the tables are for concept instances... there is no other solution, because the database must be "readable" and quick.
  4. Chappy New Member

    Why can't you model your data in just a small set of tables? Think of how SQL Server handles its system tables... sysobjects, syscolumns, etc.

    I'd really recommend you don't pursue the trigger method if you can at all help it. It may also have heavy consequences in terms of concurrency, if you intend it to be multi-user.
  5. abc New Member

    2 Chappy

    I can't, because the tables would be very big, the queries would have many joins, and it would be very slow.
    Concepts have many attributes (columns in the database), so I create a table for every concept (different concepts have different attributes)...
  6. satya Moderator

    Are the columns for the table dynamic?
    If not, as suggested, don't go for the trigger; rather apply a script method.
    ...because the database must be "readable" and quick... you may get caught in performance issues if the trigger method is adopted.

    HTH
    _________
    Satya SKJ
    Moderator
    SQL-Server-Performance.Com
  7. abc New Member

    Yes, columns are created every time an attribute is added to a concept (there is also a table for relations, and that table has an insert trigger too; that trigger runs 'alter table ... add ' + colname)...
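
    For reference, a minimal sketch of what such a relations trigger could look like (the table and column names here are made up; the real schema isn't shown in this thread):

    CREATE TRIGGER tblRelations_insert ON dbo.tblRelations AFTER INSERT AS
    BEGIN
        DECLARE @tblname varchar(100), @colname varchar(100)

        DECLARE rel_cursor CURSOR FOR
            SELECT concept_table, attr_name FROM inserted

        OPEN rel_cursor
        FETCH NEXT FROM rel_cursor INTO @tblname, @colname
        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- dynamic DDL: add the new attribute as a column on the concept table
            EXEC('alter table ' + @tblname + ' add ' + @colname + ' varchar(100) null')
            FETCH NEXT FROM rel_cursor INTO @tblname, @colname
        END
        CLOSE rel_cursor
        DEALLOCATE rel_cursor
    END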
  8. gaurav_bindlish New Member

    Did you check the Transaction log free space at the point where your triggers started executing slowly?

    BTW, we have had a similar discussion before in this forum where we saw bad performance after 40,000-50,000 records were inserted, but I am not able to trace it.

    Gaurav
  9. abc New Member

    If I disable the trigger, it does not slow down; it only slows down if I create tables in the trigger... If I commit after every 1,000 inserts, it does not slow down either...
  10. gaurav_bindlish New Member

    I think it is a typical problem of the transaction log becoming full. When you commit after every 1,000 records, the next checkpoint can write the committed records to the data file and the transaction log space is freed. But if you never commit, the log cannot be truncated past the open transaction and it keeps filling up. What is your setting for transaction log autogrow? Even if it is enabled, I would recommend keeping an eye on the size and % free space of the log. The log should be kept at the maximum size it reached during the insert triggers, because growing the log is itself a costly operation.
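
    To check the current log size and autogrow setting, something like this should do (the database name and logical file name below are placeholders):

    -- run in the user database: size and growth setting of each file
    EXEC sp_helpfile

    -- example: pre-grow the log and switch to a fixed growth increment
    ALTER DATABASE mydb
    MODIFY FILE (NAME = mydb_log, SIZE = 500MB, FILEGROWTH = 100MB)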

    HTH.

    Gaurav
  11. abc New Member

    Thanks!
    Could you tell me how I can see whether the transaction log has grown? Maybe in System Monitor?
  12. abc New Member

    And how can I see the % of free space?
  13. gaurav_bindlish New Member

    From Query Analyzer you can use DBCC SQLPERF(LOGSPACE) to monitor the % of log space used. Alternatively, you can use the SQLServer:Databases > Percent Log Used counter in Perfmon. To monitor log growths, use the SQLServer:Databases > Log Growths counter in Perfmon.
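
    For example, run this from Query Analyzer while the insert batch is executing:

    -- returns one row per database: Log Size (MB), Log Space Used (%), Status
    DBCC SQLPERF(LOGSPACE)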


    HTH.

    Gaurav
  14. abc New Member

    Thanks! Perfmon shows that the log is OK (no growth, no queue), but the Disk Queue Length and the scans for free memory space become very high.
  15. gaurav_bindlish New Member

    Have you tested this during the execution of the stored procedure for 50,000 rows?

    Gaurav
  16. royv New Member

    This doesn't sound like it would model well in a relational database. If you know C++, you might want to consider building an in-memory database using linked lists. There are some free in-memory databases on the internet too.


    "How do you expect to beat me when I am forever?"
  17. abc New Member

    2 gaurav_bindlish

    I do it in a Java application:

    connection.setAutoCommit(false);
    for (int i=1; i<50000; i++){
    sqlStr = "insert into...";
    stmt.execute(sqlStr);
    }
    connection.commit();
  18. Chappy New Member

    > I can't, because the tables would be very big, the queries would have many joins, and it would be very slow.

    Are you just assuming this? Big tables do not necessarily mean poor performance, but a bad design often does.

    > Concepts have many attributes (columns in the database), so I create a table for every concept
    > (different concepts have different attributes)...

    But what if you were to think of it differently... forget how it might look on screen. Why can't a table in your app be a row in a 'Tables' table? Your app's 'columns' could be rows in a 'Columns' table, with another SQL table to hold the attribute values of those columns. With a nice compact primary key and a clustered index, the joins in this design may perform a lot better than you think.
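
    Something along these lines (a rough sketch; all the names here are made up):

    -- one row per "table" the application needs
    CREATE TABLE AppTables (
        table_id   int IDENTITY(1,1) PRIMARY KEY,
        table_name varchar(100) NOT NULL UNIQUE
    )

    -- one row per "column" of such a table
    CREATE TABLE AppColumns (
        column_id   int IDENTITY(1,1) PRIMARY KEY,
        table_id    int NOT NULL REFERENCES AppTables(table_id),
        column_name varchar(100) NOT NULL
    )

    -- one row per attribute value of a concept instance
    CREATE TABLE AppValues (
        instance_id int NOT NULL,
        column_id   int NOT NULL REFERENCES AppColumns(column_id),
        value       varchar(100) NULL,
        PRIMARY KEY (instance_id, column_id)   -- compact key, clustered by default
    )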

  19. gaurav_bindlish New Member

    Still the same question. Have you looked at the transaction log growth and percentage used while executing this code?

    Another question: why do you want to execute the whole 50,000-row batch as one transaction? Any specific reason for this?

    I agree with Chappy's workaround. See if this works for you.

    Gaurav
  20. abc New Member

    2 Chappy

    Thanks, I think so too :) but the main requirement is that the DB stays "readable"... try to think of this application as an application for creating a database schema :)

    2 gaurav_bindlish

    Still the same question. Have you looked at the transaction log growth and percentage used while executing this code?

    Yes, the log did not grow and there was plenty of free space...

    Why do you want to execute the whole 50,000-row batch as one transaction? Any specific reason for this?

    Because it's for data import... and the user wants "either all or nothing"... if an error occurs, the rollback should remove all created tables and all inserted rows...
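
    For what it's worth, a sketch of that behavior (the column names are assumed from the trigger posted later in the thread): rolling back the transaction undoes both the inserted rows and any tables the trigger created inside that transaction.

    BEGIN TRAN
        INSERT INTO tbl1 (name1, name2) VALUES ('concept1', 1)   -- trigger creates a table
    ROLLBACK TRAN
    -- the row is gone and the table created by the trigger no longer exists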
  21. gaurav_bindlish New Member

    In this case I would recommend using Perfmon and SQL Profiler to trace the activity on the server and observe what happens after 50,000 rows have been inserted. Please post the findings here, as I have faced this problem before and am not able to recall the solution that I found.

    Gaurav
  22. bambola New Member

    quote:
    connection.setAutoCommit(false);
    for (int i=1; i<50000; i++){
    sqlStr = "insert into...";
    stmt.execute(sqlStr);
    }
    connection.commit();
    I don't really understand what you are trying to do exactly, so if I am not making sense, just ignore my post...
    I was wondering if this loop is the only way to insert the rows. You said you are importing the data. Isn't there a set-based way to write this insert?
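
    For example, something like this (import_staging, the file path, and the tbl1 column names here are only assumptions for illustration; the actual source is your Java application):

    -- load the raw import data into a staging table in one statement
    BULK INSERT import_staging
        FROM 'c:\import\concepts.txt'
        WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')

    -- then one set-based insert fires the trigger once for all rows
    -- (the trigger must handle multi-row inserts)
    INSERT INTO tbl1 (name1, name2)
    SELECT name1, name2 FROM import_staging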

    Bambola.
  23. abc New Member

    What activity exactly should I check? I've found that the Disk Queue Length after 50,000 inserts is very high... but I don't know why that is...
  24. abc New Member

    2 bambola

    It's only a test; I want to use batching later...
  25. gaurav_bindlish New Member

    Check for locks in the database and see if the disk I/O has reached its threshold.
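
    For the locks, the standard system procedures are enough (run from Query Analyzer while the import is executing):

    EXEC sp_who2   -- blocked SPIDs show up in the BlkBy column
    EXEC sp_lock   -- current locks held or requested, per SPID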

    Gaurav
  26. abc New Member

    There are no waiting locks, but the disk I/O is very high...
  27. gaurav_bindlish New Member

    In this case I suspect the disk is the bottleneck for the application. Try running multiple tests, starting with a smaller number of records and increasing it, to get a feel for the trend of % disk used and the growing disk queue length. This way you'll be able to estimate the disk I/O requirements.

    BTW, can you post the entire query for us to analyze why there are so many I/Os happening?



    Gaurav
  28. abc New Member

    OK, the trigger:

    CREATE trigger tbl1_insert on dbo.tbl1 after insert as
    declare
        @ns varchar(100),
        @cname varchar(100),
        @nsid int

    -- one row per inserted concept; the cursor handles multi-row inserts
    declare table_cursor cursor for
        select name1, name2 from inserted;

    begin
        open table_cursor
        fetch next from table_cursor into @cname, @nsid
        while @@FETCH_STATUS = 0
        begin
            -- look up the namespace name and create one table per concept
            select @ns = ns from nstbl where id = @nsid;
            exec('create table ' + @ns + '_' + @cname + '(id int identity(1,1) primary key, name1 varchar(100), name2 int, unique(name1, name2))')

            fetch next from table_cursor into @cname, @nsid
        end
        close table_cursor
        deallocate table_cursor
    end


    Then I do the insert into tbl1...

    ...I think the problem is that I create too many objects; if I do the create table... without "primary key" and "unique", there is no disk I/O queue...
