SQL Server Performance

I need some ideas.

Discussion in 'General Developer Questions' started by gus, Aug 10, 2005.

  1. gus New Member

    Hi there.

    I need to develop some kind of trigger that sends all deleted rows to some kind of "recycle bin" so they can possibly be recovered and/or analyzed (only if a great number of rows is deleted, let's suppose more than one).

    What I've accomplished so far is a generic trigger that detects how many rows were deleted and logs the database and number of rows in another table.

    To give you some idea, here is what I've got so far:

    CREATE TRIGGER Trigger_FOO ON [dbo].[table_foo]
    FOR DELETE
    AS
    set nocount on

    declare @c int
    select @c = count(*) from deleted

    if ( @c > 1 )
    begin
        declare @table varchar(100)
        declare @base varchar(100)
        declare @delete int

        select @table = B.name
        from sysobjects A with(nolock) join sysobjects B with(nolock) on ( A.parent_obj = B.id )
        where A.id = @@PROCID

        set @base = db_name()

        select @delete = BASEXXX.dbo.FDecide ( @base , @table , @c )

        if @delete = 0
            ROLLBACK TRANSACTION -- Don't let the delete proceed

        exec BASEXXX.dbo.spLog @base , @table , @c , @delete
    end



    I'm doing basic logging right now (database, table name, count(*) of deleted rows, and the function's "decision" on whether to let the delete proceed or not).

    The idea is to have the SAME trigger applied to all tables.
    That's why I get the table name through @@PROCID and sysobjects.

    It would be better if I could attach one generic trigger to all deletes on all tables, but I don't think that's possible.

    (I already KNOW how to do it for some pre-established specific table)

    What I want to do, somehow (with dynamic SQL, I guess), is go through all rows and columns of deleted and send them to some repository or recycle bin (maybe as a long string of comma-delimited values or something like that).

    I haven't figured out yet how to do it generically.
    And I don't think I can access the deleted virtual table from inside dynamic SQL (it would be out of scope).

    any ideas?

    Thanx guys.
  2. Adriaan New Member

    Why not have an "archived" BIT column, and use that to filter the data? A "restore" would be simply to unmark the column on the record.
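    A minimal sketch of this idea (the table name, column name, and cutoff value are all hypothetical):

```sql
-- Hypothetical table and cutoff; the "archived" flag replaces
-- physical deletes, so a restore is just flipping it back.
ALTER TABLE dbo.table_foo ADD archived bit NOT NULL DEFAULT 0

-- "Delete": mark the rows instead of removing them.
UPDATE dbo.table_foo SET archived = 1 WHERE time_t < 1123632000

-- "Restore": unmark them again.
UPDATE dbo.table_foo SET archived = 0 WHERE time_t < 1123632000

-- Live queries then filter on the flag.
SELECT * FROM dbo.table_foo WHERE archived = 0
```

    The trade-off is that every existing query has to add the WHERE archived = 0 filter.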
  3. FrankKalis Moderator

    To me this kind of approach is highly questionable. You can't have a generic trigger for all tables; you would need to create a trigger for each table separately, which can easily be done via a template, btw. Do you really want to log each and everything? Who decides in the end whether rows are deleted or not? Can't you somehow prevent this in the calling app? I mean, make the records that are not to be deleted read-only there?

    Frank Kalis
    Microsoft SQL Server MVP
    I support PASS Deutschland e.V. (http://www.sqlpass.de)
  4. gus New Member

    Let me explain further.

    I have no control over the tables or the delete statements.

    We are mirroring Sybase tables on QNX, a Unix-like operating system.

    We have duplicated the tables in SQL Server and "mimic" their inserts/updates/deletes by executing the same statements they execute on Sybase.

    The thing is, for a space problem they sometimes execute things like "delete from foo where time_t < number" to delete older data.
    As they are on a Unix-like system they usually use time_t int columns, but that's irrelevant.

    The important thing is that I execute the same statements as they do (and I have no control over them; of course, I could parse them before executing them on SQL Server, but that would be a headache).

    Hope it's a little more clear now.
  5. Adriaan New Member

    Is there any manipulation of the data (insert/update/delete) in your SQL Server database independent of that in the Sybase database? If not, then it may be easier to delete everything in the SQL Server database, and fill it up again with the Sybase data.

    Make sense?
  6. gus New Member

    I guess I'm not making myself clear.

    Sybase data is production data and "source data".
    It's deleted every now and then for performance reasons, to keep tables small.

    SQL Server data (the replicated or "mimic" data) feeds fact tables and then OLAP data.
    It shouldn't be deleted and must be kept for historical reasons, and because we all know how powerful our beloved SQL Server is, and it doesn't have any size restriction :) like production Sybase.

    So the idea is to let Sybase delete or purge their data while we keep it in our SQL Server.
  7. FrankKalis Moderator

    Sorry, although it's a bit clearer now, I'm still struggling to make sense of what you are doing there.
    Let me see if I understood you so far.
    You are running Sybase on Unix.
    This is the live database where the action happens.
    Then you have SQL Server with an exact copy of the database and the tables.
    And now you want to be able to repeat every action that happens to the Sybase data on the SQL Server data? Right?

    Why? Can't you simply create a linked server to Sybase? That way you would always have the most current data. See if this helps: http://support.microsoft.com/default.aspx?scid=kb;EN-US;280102
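    A rough sketch of what setting up such a linked server could look like (the server name, provider choice, and DSN are assumptions; the KB article above covers the Sybase-specific settings):

```sql
-- Hypothetical names throughout; MSDASQL is the generic
-- OLE DB provider for ODBC, one common route to Sybase.
EXEC sp_addlinkedserver
    @server     = 'SYBASE_PROD',
    @srvproduct = 'Sybase',
    @provider   = 'MSDASQL',
    @datasrc    = 'sybase_odbc_dsn'  -- a preconfigured ODBC DSN

-- The live Sybase data could then be queried via a four-part name:
-- SELECT * FROM SYBASE_PROD...foo
```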

    Hm, maybe here you'll find something interesting, too:

    Frank Kalis
    Microsoft SQL Server MVP
    I support PASS Deutschland e.V. (http://www.sqlpass.de)
  8. FrankKalis Moderator

    Aah, after you've edited your post, it's actually much clearer. :)
    What about setting up the linked server, taking an initial snapshot of the Sybase data, and then periodically querying Sybase for newly inserted or updated data?

    --
    Frank Kalis
    Microsoft SQL Server MVP
    http://www.insidesql.de
    I support PASS Deutschland e.V. (http://www.sqlpass.de)
  9. Adriaan New Member

    Okay, I can see what you want to use SQL Server for: not only the current data from the production system but also historical data, which you can't hold in the Sybase database due to some external restriction.

    Another complication seems to be that there will be some overlap between the historical data and the production data.

    I wouldn't bother with replicating every detailed insert/update/delete, but decide on a point in time when the Sybase data reaches the end of a production "cycle" and say: This is the chunk of data that I want to move to SQL Server.

    At that point, you can delete all the matching records from the SQL database (i.e. the overlap), then add the complete data from the Sybase database. Finish off by deleting "finished" data from the Sybase database ...

    Well, something along those lines should work.
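    As a sketch, that end-of-cycle move could look like this (the table names and the time_t cutoff are made up):

```sql
-- Hypothetical cutoff marking the end of a production "cycle".
DECLARE @cutoff int
SET @cutoff = 1123632000

-- 1. Delete the overlap from the SQL Server history copy.
DELETE FROM dbo.foo_history WHERE time_t <= @cutoff

-- 2. Re-add the complete chunk, assumed to have been extracted
--    from Sybase into a staging table beforehand.
INSERT INTO dbo.foo_history
SELECT * FROM dbo.foo_stage WHERE time_t <= @cutoff

-- 3. The matching rows can then be purged on the Sybase side.
```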
  10. gus New Member

    I guess I am leading you a bit off track.

    There are a lot of things I can't change right now.
    The huge beast is up and running.

    For example, Sybase is running on QNX 4.25. A very old Sybase.
    I can't use linked servers and that sort of stuff. They don't use the TCP/IP protocol, for performance reasons.
    Take my word for it: it could be done with TCP/IP enabled, but that's not possible.
    So bear with me on this one. Forget the very high-tech solutions.

    I'm restricted to creating the very same tables as theirs.
    Filling them with their data at a point in time.
    And from then on, "mimicking" their movements by executing here the same SQL statements as they run there.

    That's how it is. It's running fine and well that way. And it can't be changed now.

    But we're losing the initial focus here.

    The thing is, in certain conditions I gotta save the deleted rows to some "recycle bin".

    I'm about to do something like this in the trigger with dynamic SQL:

    insert into 'TBL_RECYCLE' + @nro
    select * from deleted

    and then keep track of the numbers ( @nro ) in another table.

    But as I can't access the deleted virtual table from dynamic SQL, I guess I will have to insert the rows into a #tmp table first,
    and then send them from that #tmp table to 'TBL_RECYCLE' + @nro with dynamic SQL.

    Any better ideas than mine guys? (sure you have).

    It would be great if I could pass the deleted virtual table from the trigger to a stored procedure, and then do the work inside the sproc.
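    A minimal sketch of the #tmp approach described above (all names are hypothetical). Note that a temp table created in the trigger is visible inside dynamic SQL executed with EXEC() from the same scope, which works around the "deleted is out of scope" problem:

```sql
CREATE TRIGGER Trigger_Foo_Recycle ON dbo.table_foo
FOR DELETE
AS
SET NOCOUNT ON

DECLARE @nro int, @sql varchar(1000)
SET @nro = 1  -- assumed: the next batch number, read from a tracking table

-- SELECT INTO copies whatever columns the table currently has,
-- so the trigger survives column additions/removals.
SELECT * INTO #deleted_rows FROM deleted

-- The child scope of EXEC() can see #deleted_rows, even though
-- it cannot see the deleted virtual table itself.
SET @sql = 'SELECT * INTO TBL_RECYCLE_' + CAST(@nro AS varchar(10))
         + ' FROM #deleted_rows'
EXEC (@sql)
```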
  11. Adriaan New Member

    Why not insert from deleted and inserted straight into TBL_RECYCLE, a permanent staging table with a column for the "nro" value?

    You then create a job that runs an SP every X minutes that will convert the data from the staging table into the actual history table, then clean out the processed data for that "nro".

    Still: unless you must be able to reproduce the status quo of the Sybase data at every possible point in time, using only the SQL Server data, I can see no reason why you would have to mimic every single move in the Sybase system.
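    A sketch of this staging setup (all names hypothetical; the staging table here mirrors one table's columns plus the batch number):

```sql
CREATE TABLE dbo.TBL_RECYCLE_STAGE (
    nro        int         NOT NULL,  -- batch number
    deleted_at datetime    NOT NULL DEFAULT GETDATE(),
    col_a      int         NULL,      -- mirrors table_foo's columns
    col_b      varchar(50) NULL
)
GO
CREATE TRIGGER trg_foo_recycle ON dbo.table_foo
FOR DELETE
AS
SET NOCOUNT ON
DECLARE @nro int
SELECT @nro = ISNULL(MAX(nro), 0) + 1 FROM dbo.TBL_RECYCLE_STAGE

-- Straight insert from the deleted virtual table; no dynamic SQL.
INSERT INTO dbo.TBL_RECYCLE_STAGE (nro, col_a, col_b)
SELECT @nro, col_a, col_b FROM deleted
```

    A scheduled job would then move finished batches from the staging table into the history table and delete the processed rows by nro.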
  12. gus New Member

    I have to make the insert into TBL_RECYCLE sort of dynamic because, remember, there are a lot of tables I have to keep track of, each with a different structure.

    And I want the same trigger applied to all of them.
    And I don't want trouble when I add/remove a column from some table.

    That's why I can't send the deleted rows all to the same table.

    And I have to mimic Sybase by executing their strings because it's the only way.
    I've got no TCP/IP connectivity. Forget linked servers, OLE DB, ODBC, etc.

    It would be helpful to be able to pass the deleted virtual table to a sproc as a table variable... damn... I wish I could do that...
  13. Adriaan New Member

    Okay, this starts to make more sense.

    In the past we had an audit trail module in our client app that could create/alter triggers on the fly. It read the column names and data types from syscolumns, then created the trigger scripts. For each column there would be a separate variable, you basically couldn't cover TEXT or NTEXT columns, and I think there was a problem handling batch operations.

    This kind of stuff really is dynamic SQL at its worst, so you're in for a few headaches - ah well, you already have plenty of those by the sound of it ... :D
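    The generator Adriaan describes might be sketched like this (simplified; a real audit-trail generator would also handle quoted identifiers, identity columns, and batches, and the table and audit-table names here are hypothetical):

```sql
-- Build the column list for one table from syscolumns, skipping
-- TEXT (xtype 35) and NTEXT (xtype 99), then print a trigger
-- script for it.
DECLARE @cols varchar(8000)
SET @cols = ''

SELECT @cols = @cols + ', ' + c.name
FROM syscolumns c
JOIN sysobjects o ON o.id = c.id
WHERE o.name = 'table_foo'
  AND c.xtype NOT IN (35, 99)
ORDER BY c.colid

SET @cols = SUBSTRING(@cols, 3, 8000)  -- drop the leading ', '

PRINT 'CREATE TRIGGER trg_audit_table_foo ON dbo.table_foo'
PRINT 'FOR DELETE AS'
PRINT 'INSERT INTO audit_table_foo (' + @cols + ')'
PRINT 'SELECT ' + @cols + ' FROM deleted'
```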
  14. gus New Member

    I particularly don't like generated T-SQL. But maybe it's just me.
    I'd rather have dynamic T-SQL, or at least T-SQL generated inside T-SQL.

    Hehe, you have just said it... lots and lots of headaches :)
  15. Adriaan New Member

    You can't really use dynamic SQL in triggers (to this degree, anyway) unless performance is no issue. There may also be some 3rd-party software that you could look into - check the section here on the site.
  16. gus New Member

    Ummm, I didn't think about the performance of dynamic T-SQL inside triggers.

    My small tests on test tables have succeeded, of course.

    Now that you mention it, I don't know how it will perform at a large scale.

    Anyway, deletes are not so frequent, and the cases when the select count(*) from deleted will be large (when I will send rows to RECYCLE) are even less frequent. In fact very rare: when the Sybase guys perform "table depuration".

    I'll test it.

    Of course I'm no dynamic T-SQL fan (I also hate cursors and stuff), but sometimes... damn it... there's no way to get around it... ;-)

    ( This is a hot topic now!!! )
  17. Adriaan New Member

    One vast improvement would be to stop using

    select @c = count(*) from deleted
    if ( @c > 1 )

    and replace that with

    IF EXISTS (SELECT <your_id_column> FROM deleted)

    The reason is that with EXISTS the query ends as soon as one row is found, whereas with COUNT(*) all rows must be processed. Also use EXISTS when checking other tables for rows matching "deleted" and "inserted", and drive the query from the inserted and deleted tables to reduce the number of rows being processed, like so:

    IF EXISTS (SELECT i.id FROM inserted i WHERE NOT EXISTS (SELECT 1 FROM OtherTbl o WHERE o.[key] = i.[key]))
  18. gus New Member

    The thing is I NEED to know the number of rows deleted, not just whether there were deleted rows.

    I need this number to later decide what to do.

    If, for example, this number is small, below some magic, kind of arbitrary number (for example 30 or 40), I let the delete proceed.
    Else (it's greater) I suppose it's a depuration and prevent the delete from continuing (ROLLBACK).

    It's a crude method (I know, I'm a prehistoric beast ;-) ), but I have found no other solution so far to prevent "lower-level" depurations.
  19. Adriaan New Member

    The number of rows updated/deleted ... simple!

    Before you do any other SELECT/INSERT/UPDATE/DELETE in your trigger, you can read @@ROWCOUNT to get that information:

    declare @rc int
    set @rc = @@ROWCOUNT -- number of rows affected by the statement that fired the trigger

    After any SELECT/INSERT/UPDATE/DELETE statement is executed, @@ROWCOUNT reflects the number of rows affected by that statement - that's why you must check it at the very start of the trigger, and store it in a variable for use further on in the trigger.

    Another point of interest is that @@ROWCOUNT is available even if you have used SET NOCOUNT ON.

    Finally, if an UPDATE/DELETE trigger fires because of a cascading update/delete from a different table, then "deleted" and "inserted" will both be empty. I would assume that in that case @@ROWCOUNT will be 0 as well, but you'll have to check that.
  20. gus New Member

    You're right!

    Sure, I'm a smartass, but I always forget something ;-)

    Damn, I gotta automate the trigger updates somehow.
    I've got lots of triggers to change now with your improvement... :-(
  21. Adriaan New Member

    Sorry, but the position of resident smartass is already taken. :D
