I created a view to help me out in displaying the data a bit cleaner. This view converts the fields into varchars of specific sizes, so that they are displayed in a standard fashion in various reporting locations. The view that I have used is as follows:
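The original view isn't reproduced here, but a minimal sketch of the idea looks like the following. The column names and varchar sizes are illustrative assumptions, not the actual definition:

```sql
-- Hypothetical sketch: column names and sizes are illustrative only.
-- The point is simply to CAST each field to a fixed-width varchar
-- so every reporting location displays the data the same way.
CREATE VIEW dbo.vwAuditDataModifications
AS
SELECT
    CAST(ServerName   AS varchar(30))   AS ServerName,
    CAST(LoginName    AS varchar(50))   AS LoginName,
    CAST(DatabaseName AS varchar(50))   AS DatabaseName,
    CAST(TextData     AS varchar(2000)) AS TextData,
    StartTime
FROM dbo.AuditDataModificationDetail
```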
These stored procedures started out living in the master db, and were given the sp_ prefix for that reason, so they could be called from anywhere. As the project grew, I moved them to an Audit database, but failed to rename them. There are other naming issues I have noticed over the life of this system: in places I name objects ‘Modification’ and in others ‘Modifications’. If I had the gumption to go back and redo it all, I would. But seeing how it is now in production, I lazily leave it the way it is. Take this chance to clean it up yourself.
This proc will start up the trace on the remote monitored server. There are a load of parameters to the proc that allow for a lot of customization. A name will be created for the trace file, based on the Servername and a datetime stamp. A history record is created when the trace is started, so we can track the individual trace file status. We call the trace system stored procedures to set things up. Once all the events and columns have been set, we apply any filter values from the table [AuditDataModificationFilterConfig]. There are also some specific hard-coded filter values in the proc. Once this has all been done, the trace is ready to start, and we call sp_trace_setstatus to get it going. The trace will typically run for 1 hour and then stop. A new trace will start at that time, from a job that will be described later.
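A heavily simplified sketch of that setup is below. The real proc sets many more events, columns, and filters; the file path and the chosen event/column IDs here are assumptions for illustration:

```sql
-- Simplified sketch of the trace setup. The share path is hypothetical,
-- and only one event is shown; the real proc configures many more.
DECLARE @TraceID int, @maxfilesize bigint, @StopTime datetime, @on bit
DECLARE @FileName nvarchar(256)

SET @maxfilesize = 50
SET @StopTime    = DATEADD(hh, 1, GETDATE())       -- run for one hour, then stop
SET @FileName    = N'\\AuditShare\Traces\' + @@SERVERNAME + N'_'
                 + CONVERT(nvarchar(20), GETDATE(), 112)  -- datetime stamp
SET @on = 1

-- create the trace (stopped), with an automatic stop time
EXEC sp_trace_create @TraceID OUTPUT, 0, @FileName, @maxfilesize, @StopTime

-- capture TextData (col 1) and LoginName (col 11) for SQL:BatchCompleted (event 12)
EXEC sp_trace_setevent @TraceID, 12, 1,  @on
EXEC sp_trace_setevent @TraceID, 12, 11, @on

-- example hard-coded filter: ignore Profiler's own activity
-- (ApplicationName = col 10, logical AND = 0, NOT LIKE = 7)
EXEC sp_trace_setfilter @TraceID, 10, 0, 7, N'SQL Profiler'

-- flip the trace to running
EXEC sp_trace_setstatus @TraceID, 1
```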
This procedure is used to start traces on the entire lot of enabled servers you want to monitor. It will cycle thru the tables [AuditDataModificationsConfig] and [AuditDataModificationsClientConfig] and find all the enabled servers, based on the client param you pass in, or simply get them all if no client param was supplied. After getting each [Servername], it will call [sp_StartDataModificationsTrace] to start the traces, as described above. History is added to the [AuditDataModificationTraceHistory] table.
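The server loop can be sketched roughly as follows. The table and proc names come from the text; the column names and the exact @Client handling are assumptions:

```sql
-- Hedged sketch of the server loop. Column names (Enabled, Client)
-- and the NULL-means-all convention are assumptions.
DECLARE @ServerName sysname, @Client varchar(50)
SET @Client = NULL   -- NULL: start traces on every enabled server

DECLARE curServers CURSOR FOR
    SELECT c.ServerName
    FROM dbo.AuditDataModificationsConfig c
    JOIN dbo.AuditDataModificationsClientConfig cc
      ON cc.Client = c.Client
    WHERE c.Enabled = 1
      AND (@Client IS NULL OR cc.Client = @Client)

OPEN curServers
FETCH NEXT FROM curServers INTO @ServerName
WHILE @@FETCH_STATUS = 0
BEGIN
    -- start the remote trace; this proc also writes the history row
    EXEC dbo.sp_StartDataModificationsTrace @ServerName = @ServerName
    FETCH NEXT FROM curServers INTO @ServerName
END
CLOSE curServers
DEALLOCATE curServers
```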
Hourly, we will not only be starting a trace, we will be processing the traces that have previously run. We start this process by looking into the table [AuditDataModificationHistory] and finding the records that have not been imported yet. We can do this for all servers, or specify a ServerName. We grab the filename and path from the history table, and check that the file exists prior to loading the trace file. If the file exists, we update the history table with a status indicating we are importing the data. We then use the function ::fn_trace_gettable to import the data from the trace file. This data gets loaded into a staging table first, then into the table [AuditDataModificationDetail]. If errors occur, we note this in the [Imported] status field in the history table. If no errors occur, we simply update this status field at the end, letting all know that it was successfully imported.
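The import step for a single history row can be sketched like this. The status codes, column names, and staging table name are assumptions; the existence check uses the undocumented but long-standing xp_fileexist:

```sql
-- Sketch of importing one pending trace file. Status values (0/1/2)
-- and column names are illustrative assumptions.
DECLARE @FileName nvarchar(256), @FileExists int

SELECT TOP 1 @FileName = FileName
FROM dbo.AuditDataModificationHistory
WHERE Imported = 0                       -- not yet imported

-- xp_fileexist sets the output param to 1 when the file is present
EXEC master.dbo.xp_fileexist @FileName, @FileExists OUTPUT

IF @FileExists = 1
BEGIN
    UPDATE dbo.AuditDataModificationHistory
    SET Imported = 2                     -- status: importing
    WHERE FileName = @FileName

    -- stage the raw trace rows, then move them into the detail table
    INSERT INTO dbo.AuditDataModificationStaging (TextData, LoginName, StartTime)
    SELECT TextData, LoginName, StartTime
    FROM ::fn_trace_gettable(@FileName, DEFAULT)

    INSERT INTO dbo.AuditDataModificationDetail (TextData, LoginName, StartTime)
    SELECT TextData, LoginName, StartTime
    FROM dbo.AuditDataModificationStaging

    UPDATE dbo.AuditDataModificationHistory
    SET Imported = 1                     -- status: successfully imported
    WHERE FileName = @FileName
END
```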
In our case, we copy these trace files to multiple locations for other processing. A copy is sent to a central clearing house for reporting. Another copy is sent to a third party within our group that reviews our actions, mainly looking at the DBA’s actions.
As with the above proc that starts all the traces, this proc will process them all, cycling thru the available servers by looking for enabled servers in the table [AuditDataModificationsConfig] and calling the above proc [sp_ProcessClientDataModificationTraces] to get things going. This allows a job to call one proc and get everything going.
This proc allows me to process all the traces for a particular client. I can pass in a Client and ServerType as params, and it will cycle thru all available records from the table [AuditDataModificationConfig] and call [sp_ProcessDataModificationTraces] for each client. This is the proc used from within the job that lets me specify a subset of servers to process, allowing the job to continue if a particular client has issues, so that the entire job does not fail.
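The keep-going-on-failure behavior can be sketched like this. Since this vintage of T-SQL predates TRY...CATCH, checking @@ERROR after each call is one way to let the loop continue past a problem client; the column names here are assumptions:

```sql
-- Illustrative client loop. Isolating failures via @@ERROR is an
-- assumption about how the original job keeps running.
DECLARE @Client varchar(50)

DECLARE curClients CURSOR FOR
    SELECT DISTINCT Client
    FROM dbo.AuditDataModificationConfig
    WHERE Enabled = 1

OPEN curClients
FETCH NEXT FROM curClients INTO @Client
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC dbo.sp_ProcessDataModificationTraces @Client = @Client
    IF @@ERROR <> 0
        PRINT 'Client ' + @Client + ' failed; continuing with the next one'
    FETCH NEXT FROM curClients INTO @Client
END
CLOSE curClients
DEALLOCATE curClients
```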
This stored proc allows you to customize the items that you would like purged from the trace data, and not stored on the db server. Now, this seems a bit hypocritical for an audit, purging data that shows modifications… but hear me out, there is a valid reason. The trace will pick up all actions that occur for certain events, and it is indiscriminate about which actions it picks up; some of these may be acceptable to you. Some actions you may want to ignore, as they have been proven to be safe. You can remove them at this stage, and avoid false positives in your reporting. For example, any time a web user updates a table, that event will be captured and stored. However, since this is a non-issue, I do not want to store these actions. You may find others as well in your organization. Add these here, but be careful that you do not purge too much valid data.
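A purge of a "known safe" action boils down to a targeted DELETE. The login name and statement pattern below are purely illustrative of the web-user example:

```sql
-- Example purge of a proven-safe action. The login and LIKE pattern
-- are hypothetical; substitute the actions you have vetted yourself.
DELETE FROM dbo.AuditDataModificationDetail
WHERE LoginName = 'WebAppUser'                          -- known application login
  AND TextData LIKE 'UPDATE dbo.SessionState%'          -- known safe statement
```

Keep each condition as narrow as you can: a login filter alone would silently discard anything that login ever does, which is exactly the over-purging the text warns about.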
The purpose of this stored proc is to cycle thru all enabled records in the table [AuditDataModificationEMailConfig] and call the next proc, which will email the necessary individuals information about this server and its actions. From the config table, we pull the vital information to send to the next proc. A cursor is used to process the records, so a single execution of this proc from a job covers the entire set of enabled servers and gets their information emailed out.
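That cursor loop might look roughly like this. The config column names and the called proc's name are assumptions based on the description:

```sql
-- Sketch of the email dispatch loop. Column names and the proc name
-- sp_EmailDataModificationReport are hypothetical.
DECLARE @ServerName sysname, @Recipients varchar(4000),
        @ReportType varchar(100), @Query nvarchar(4000)

DECLARE curEmail CURSOR FOR
    SELECT ServerName, Recipients, ReportType, Query
    FROM dbo.AuditDataModificationEMailConfig
    WHERE Enabled = 1

OPEN curEmail
FETCH NEXT FROM curEmail INTO @ServerName, @Recipients, @ReportType, @Query
WHILE @@FETCH_STATUS = 0
BEGIN
    EXEC dbo.sp_EmailDataModificationReport
         @ServerName = @ServerName,
         @Recipients = @Recipients,
         @ReportType = @ReportType,
         @Query      = @Query
    FETCH NEXT FROM curEmail INTO @ServerName, @Recipients, @ReportType, @Query
END
CLOSE curEmail
DEALLOCATE curEmail
```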
This proc will create a formatted email message, inserting the values of the params passed in, and send the email out to the intended parties. The params for this are as follows:
@CCRecipients varchar(4000) = null,
@BCCRecipients varchar(4000) = null,
@Query nvarchar(4000) = null
The Recipients params are self-explanatory. The Servername will be used to display which server this data report refers to. The date is the time the report was run. The ReportType is a description set in the config table, explained above. The query is what you supplied to produce this data. If no results are returned, the email will contain no data attachment, and will indicate there were no modifications. If modifications did occur, and data was returned from your query, this data will be attached to the email in simple ASCII format.
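Inside the proc, the send itself could be done along these lines. Whether the original used SQL Mail's xp_sendmail or Database Mail's sp_send_dbmail is not stated, so this Database Mail version, and the param names it references beyond the three listed above, are assumptions:

```sql
-- Hypothetical send step using Database Mail; assumes the proc also
-- has @ServerName, @Date, @ReportType, and @Recipients params.
DECLARE @Subject nvarchar(255), @Body nvarchar(1000)
SET @Subject = @ServerName + ' - ' + @ReportType + ' - '
             + CONVERT(nvarchar(20), @Date, 120)
SET @Body    = 'Data modifications detected on ' + @ServerName
             + '. Query results are attached as plain text.'

EXEC msdb.dbo.sp_send_dbmail
     @recipients                  = @Recipients,
     @copy_recipients             = @CCRecipients,
     @blind_copy_recipients       = @BCCRecipients,
     @subject                     = @Subject,
     @body                        = @Body,
     @query                       = @Query,          -- NULL query: no attachment
     @attach_query_result_as_file = 1
```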
This isn’t the best way, or the only way, to report on this information. But when I created this and supported just a few servers, it was easy and did the trick. Later, I dismissed these procs and used Reporting Services and subscriptions to send me the data. Now, this task has been outsourced to a third party that handles it differently.