
SQL Server Performance Forum – Threads Archive

Trigger Counter

Hi,

Firstly, apologies if this is not in the correct thread or appears rather "beginner-ish". I am new to all this SQL stuff.

I have a table to which rows are added throughout the working day. When a row is added, a process picks up a message string from a column in that row, acts on it accordingly, and the row is then deleted. Basically a FIFO-type scenario.

I am very interested in having a counter on that table which will tell me how many rows have been added to it during several periods throughout the day, i.e. after 1 hr, 3 hrs, 10 hrs, etc.

I have been looking into adding a trigger to the table so that when an INSERT is done, a counter is incremented. From what I have read so far, "sp_user_counter" could be an answer, or could it?

As this table could have anywhere between 10 and 1,000,000 rows added per day, I am not sure whether this would have an impact on performance.

If what I am saying makes sense to you, do you think the trigger and sp_user_counter approach would work? If not, what would be the best approach to do this without any performance impact?

I am not T-SQL literate, so any code snippets would be appreciated.

Thanks in advance for any help provided.

Regards,
SCG.
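In case it helps explain what I mean: from what I can find, sp_user_counter is actually a set of ten procedures in master (sp_user_counter1 through sp_user_counter10), and as far as I can tell each one just sets the value of one of the "User Settable" counters that show up in Performance Monitor, e.g.

EXEC master.dbo.sp_user_counter1 42   -- sets "User counter 1" in Perfmon to 42

So it seems to set a value rather than increment one, which is why I assumed a trigger would have to work out the running total itself and then push it into the counter. I may well have misunderstood this, though.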
Any specific reason why you need such data? Do you have any specific performance issues at present? Satya SKJ
Microsoft SQL Server MVP
Writer, Contributing Editor & Moderator
http://www.SQL-Server-Performance.Com
This posting is provided AS IS with no rights for the sake of knowledge sharing. Knowledge is of two kinds. We know a subject ourselves or we know where we can find information on it.

I work in a Support environment and the reason is purely statistical: all I would like to know is the number of rows that have been inserted into a table during random periods of the day. That in turn could tell me quite a few things about how our customers are using our software: peak times, throughput, etc. The number of times I require this value could be once, twice or up to ten times a day.

It would be good to be able to call a stored proc from the command line, say, that would return the total number of rows that have been inserted into a given table. I am not interested in the number of rows in the table at a given time, which could be achieved with COUNT. I can see something along the lines of two stored procs: one to reset the counter and one to obtain the count value, with a counter behind the scenes keeping a rolling count of rows added to the table.

What I am stumbling on is how the counter can be implemented. Is there a predefined stored proc or function call I could use that already does this, or is it a bit trickier to implement? One issue is that I can't change our developer's code, so I can't have the counter updated at the point where he performs the INSERT into the table.

There are currently no performance issues to mention. Again, thanks in advance for any advice, code snippets, etc. provided.
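Just to sketch the kind of thing I am picturing (every object name here is made up by me, and I have no idea whether this is the right way to go about it):

-- One-row table holding the rolling count
CREATE TABLE dbo.MsgInsertCounter (InsertCount int NOT NULL)
INSERT INTO dbo.MsgInsertCounter (InsertCount) VALUES (0)
GO

-- Reset the counter
CREATE PROCEDURE dbo.usp_ResetInsertCounter
AS
    UPDATE dbo.MsgInsertCounter SET InsertCount = 0
GO

-- Read the current counter value
CREATE PROCEDURE dbo.usp_GetInsertCounter
AS
    SELECT InsertCount FROM dbo.MsgInsertCounter
GO

-- Something (a trigger on the message table?) would then have to add the number
-- of newly inserted rows to InsertCount; that is the part I am stuck on.

The procs could then be called from the command line with something like osql -E -d MyDatabase -Q "EXEC dbo.usp_GetInsertCounter".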

Bump. Would I be correct in assuming that this is not an easy problem to solve, and that it would require a lot of investigation and effort to get working?
If data is only ever inserted (never deleted), you could create an SP that does a SELECT COUNT(*) on the table and stores the result in a log table with the datetime of the sample, along with the difference compared to the previous log entry (if you have to calculate the differences afterwards, performance can be pretty bad). You then call this SP from a job that is scheduled to run at the required interval.
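Roughly along these lines (the object names are only examples; I'm using dbo.MessageQueue as a stand-in for your table):

CREATE TABLE dbo.RowCountLog (
    SampleTime datetime NOT NULL DEFAULT (GETDATE()),
    RowsInTable int NOT NULL,
    DiffFromPrevious int NULL
)
GO

CREATE PROCEDURE dbo.usp_LogRowCount
AS
BEGIN
    SET NOCOUNT ON

    DECLARE @now int, @prev int

    -- Current number of rows in the monitored table
    SELECT @now = COUNT(*) FROM dbo.MessageQueue

    -- Most recent sample, if there is one
    SELECT TOP 1 @prev = RowsInTable
    FROM dbo.RowCountLog
    ORDER BY SampleTime DESC

    -- DiffFromPrevious stays NULL for the very first sample
    INSERT INTO dbo.RowCountLog (RowsInTable, DiffFromPrevious)
    VALUES (@now, @now - @prev)
END
GO

Schedule dbo.usp_LogRowCount in a SQL Server Agent job at whatever interval you need.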
Thanks Adriaan. Unfortunately, data will be deleted from this table. The top row is processed then removed from the table. This table uses a FIFO approach to handling messages.
Why not add a BIT column to flag the top row as 'deleted', and keep the row? Then your SP can do the count and delete all flagged rows.
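Something like this (IsProcessed and the other names are invented, and it does assume the processing code can be changed to flag rows instead of deleting them):

-- The processing code would do this instead of a DELETE:
-- UPDATE dbo.MessageQueue SET IsProcessed = 1 WHERE ...

CREATE PROCEDURE dbo.usp_CountAndPurgeProcessed
AS
BEGIN
    SET NOCOUNT ON

    -- Remove the flagged rows and report how many there were
    DELETE FROM dbo.MessageQueue WHERE IsProcessed = 1
    SELECT @@ROWCOUNT AS RowsProcessedSinceLastRun
END
GO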
Don't you think you may be wasting database space by keeping those unwanted rows just for the sake of a count? Triggers can help here: the deleted table stores copies of the affected rows during DELETE and UPDATE statements. We take a similar approach to finding the number of rows affected on a particular financial database: all we do is count the rows in the deleted table whenever such an update occurs. I know this is not good from a performance standpoint, as each trigger operates within the same transaction as the statement that caused the delete, insert or update. So we keep the UPDATE and DELETE transactions to batches of around 100 rows every time.
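As a rough illustration of counting from the deleted table inside a trigger (all object names here are invented):

CREATE TABLE dbo.DeleteCountLog (SampleTime datetime NOT NULL, RowsDeleted int NOT NULL)
GO

CREATE TRIGGER trg_MessageQueue_DeleteCount ON dbo.MessageQueue
AFTER DELETE
AS
BEGIN
    -- The deleted pseudo-table holds a copy of every row removed by the statement
    INSERT INTO dbo.DeleteCountLog (SampleTime, RowsDeleted)
    SELECT GETDATE(), COUNT(*) FROM deleted
END
GO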
Satya SKJ
Good point. But there will be a trade-off, as deletions are time-consuming – if you delete a row for every insert action, then wouldn’t there be a point where a batch delete will take less time? My proposed flag would not help there, as updates are as time-consuming as deletes. Why not copy @@rowcount from the insert trigger into a log table? That would be the cleanest option.
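For example (names made up again; I'm counting the inserted pseudo-table, which gives the same number as @@rowcount does inside the trigger):

CREATE TABLE dbo.InsertCountLog (SampleTime datetime NOT NULL, RowsInserted int NOT NULL)
GO

CREATE TRIGGER trg_MessageQueue_InsertCount ON dbo.MessageQueue
AFTER INSERT
AS
BEGIN
    INSERT INTO dbo.InsertCountLog (SampleTime, RowsInserted)
    SELECT GETDATE(), COUNT(*) FROM inserted
END
GO

-- Rows added between any two points in time:
SELECT SUM(RowsInserted) AS RowsAdded
FROM dbo.InsertCountLog
WHERE SampleTime BETWEEN '20060101 08:00' AND '20060101 18:00'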
Agreed, the deletes will have that performance lag, particularly if the table is heavily fragmented, and in that case keeping the transactions small helps quite a bit. I think @@rowcount from an insert trigger will have the same sort of performance impact as the delete. Satya SKJ
Thanks for the information so far, guys. You've got me thinking. I had a look around and came across IDENT_CURRENT. What do you think about using this to tell me the current identity value of the table, storing that value, then calling it again at a later time? I would then take the difference between the two values, and that would be my "number of rows" added during that period. For example:

call SELECT IDENT_CURRENT('table1'), which returns the value 10 (store this value);
1 minute later, call SELECT IDENT_CURRENT('table1'), which returns the value 20 (store this value);
then take the difference between the two values to get the number of rows added to that table.

Do you see any problems with this approach at all?
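To make that concrete, this is the sort of thing I mean (dbo.MessageQueue is just a stand-in for the real table name):

DECLARE @before numeric(38,0), @after numeric(38,0)

SELECT @before = IDENT_CURRENT('dbo.MessageQueue')   -- e.g. returns 10

WAITFOR DELAY '00:01:00'                             -- some time later...

SELECT @after = IDENT_CURRENT('dbo.MessageQueue')    -- e.g. returns 20

-- Rows added in between, assuming the identity increments by 1
SELECT @after - @before AS RowsAdded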
Any insert that is cancelled (could be an insert trigger issuing ROLLBACK TRAN) will still have increased the identity counter. Make sure you set your "counter" trigger as the last one for insert, if there is more than one trigger for insert.