Audit Data Modifications

System Overview
On the Monitor Server, there is an Audit database.  Within this database, there are various tables, a few jobs, and a few stored procedures.

This is the main table containing data gathered from the Trace files from other servers.  The data will sit here and be reported on.
The fields in the [AuditDataModificationDetail] table are as follows.
This information is the core data we retrieve from the server that experienced the data modification.  It will provide ample information for you to investigate the infraction.  These are all fields that are available to be traced for the Data Modification events I will describe later.  They should be self-explanatory; if not, BOL can describe them.  The ServerName field is important to maintain, so that when you monitor multiple servers you know where the breach occurred.

As each trace file is created on each monitored server, I log it into a history table.  This data is used to keep track of the status of the trace file, and to document, historically, the traces performed.  This information can be used to troubleshoot issues later.
The fields in the [AuditDataModificationHistory] table include the following:
The [AuditDataModificationHistoryID] field is an Identity to make the row unique.  The [TraceFile] indicates not only the name of the trace file created, but the path it was created in.  There is an [Imported] field that signals the state of the Trace File.  The [DatabaseServerName] field indicates which Monitored Server we are watching.  As a file is created, the initial value of the [Imported] field is set to 0 (zero), indicating it was created.  When the file is successfully imported, it is set to 1.  Other values will indicate various errors that occurred during processing.  See the stored procedure [sp_ProcessDataModificationTraces] for more details.
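A minimal sketch of that history table follows; the column names come from the description above, but the data types are assumptions and may differ in your system.

```sql
-- Hypothetical DDL for the trace-file history table.  Column names are
-- from the article; data types and sizes are assumptions.
CREATE TABLE dbo.AuditDataModificationHistory (
    AuditDataModificationHistoryID int IDENTITY(1,1) PRIMARY KEY,
    TraceFile          nvarchar(500) NOT NULL,  -- path and name of the trace file
    DatabaseServerName sysname       NOT NULL,  -- which monitored server produced it
    Imported           int NOT NULL DEFAULT 0   -- 0 = created, 1 = imported, other values = errors
);
```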

To keep track of the last run status of a Monitored Server, I created a simple table with the following fields.
These let me record the ServerName, the last run date, and the success or failure status of the last run.  This is rarely used, but it does come in handy when I need to spot check the system.  If I think that a trace has not fired off appropriately, I can reference this table for the most recent occurrence of a trace and determine whether it's running or not.  I consider this a cheat, but a quick way to monitor status.
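A spot check is just a lookup against that table.  Both the table and column names below are illustrative, based on the description above:

```sql
-- Hypothetical names: the table tracks server, last run time,
-- and last run success/failure.
SELECT ServerName, LastRunDate, LastRunStatus
  FROM Audit.dbo.AuditDataModificationLastRun  -- table name is illustrative
 WHERE ServerName = 'ServerName';
```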

This contains a ServerName field and an Enabled flag.  These two simple fields allow the system to know whether a server is enabled to be processed by this system.  Simple.  Select the Enabled records, and voila! You now have the result set of servers to process.
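The lookup is as simple as it sounds.  Assuming the server list table is named [AuditServer] (the name here is illustrative), the driver query is just:

```sql
-- Hypothetical table name; per the article it holds only ServerName
-- and an Enabled flag.
SELECT ServerName
  FROM Audit.dbo.AuditServer
 WHERE Enabled = 1;   -- only enabled servers get processed
```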
This is an addition to the system that allows me to group servers by a Client and a Type of Server.  At one point, I had trouble with the job that processes the servers failing partway through because of a linked server failure.  This would cause all subsequent servers to not be monitored.  This table was introduced to group servers together.  The fields of this table are as follows:
This table allows me to group a client's servers together, and further subgroup them by ServerType.  For example, I have a client with three types of servers: Workflow servers, Mailroom servers, and Dataentry servers.  I can call each of these separately to be processed, by Client and ServerType.  Or I can call them to be processed all at once, by simply choosing the Client parameter and leaving the ServerType blank.  Multiple clients can be called separately.  Or I can call all servers to be processed by leaving off both parameters.  These features allowed me to set up multiple steps in the processing job (described later) and continue to the next step on failure, allowing better processing.
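Assuming the processing procedure takes optional @Client and @ServerType parameters, the three call patterns described above look like this (the procedure and parameter names here are illustrative, not the article's actual object names):

```sql
-- All three calls are illustrative sketches of the parameter combinations.
EXEC dbo.ProcessMonitoredServers @Client = 'ClientA', @ServerType = 'Workflow'; -- one subgroup
EXEC dbo.ProcessMonitoredServers @Client = 'ClientA';                           -- all of a client's servers
EXEC dbo.ProcessMonitoredServers;                                               -- every enabled server
```

Putting each of these calls in its own job step, with the step set to continue on failure, is what keeps one bad linked server from blocking the rest.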
This table allows me to set up multiple reports to be generated for the needed parties.  In most cases, I have narrowed it down to two groups of modifications: those performed by the DBA, and those not performed by the DBA.  The table contains the following fields:
For each [ServerName] I can have multiple [ReportType] rows.  If a record is [Enabled], it will be processed by a job described later. The rest of the fields describe the type of email report being sent out.  To whom it will go is indicated in the Recipients fields.  The last field is the [Query] field, wherein you write the specific SQL to pull out the criteria you want.  I have hard-coded date parameters into mine that will process a specific datetime range.  A sample SQL statement is below.

select *
  from Audit.dbo.vAuditDataModificationDetail
  where StartTime between @DateFrom and @DateTo
    and LoginName in ('DOMAIN\TJay.Belt')
    and ServerName = 'ServerName'

As you can tell from the above query, I am only looking at a certain date range.  Then, I further filter it by login name, in this case, my domain login name.  Add as many of these names as your DBA team supports.  Then I am specifying the [ServerName].  This should produce all modifications that I have made within the timeframe on said server.  As you can imagine, the combinations are endless as to what you will want to monitor.
In our case, we basically want to sanity check what the DBAs have done, and what anyone else may have done.  So those are the two queries we use against all the monitored servers involved.  This way we know what the DBAs have done, and what anyone who may not be authorized to make modifications has done.
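The complementary "everyone else" query is the same statement with the IN negated.  A sketch:

```sql
-- Modifications made by anyone who is NOT on the DBA team.
select *
  from Audit.dbo.vAuditDataModificationDetail
  where StartTime between @DateFrom and @DateTo
    and LoginName not in ('DOMAIN\TJay.Belt')  -- list every DBA login here
    and ServerName = 'ServerName'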

This table allows me to configure a given Servername with specific things to look for. 
In most cases, it is the same types of filter items we look for.  But the flexibility of this allows you to pick and choose items for each Servername.  This table contains the following fields:
This lets me filter by each possible [ServerName].  You will notice that I cannot spell Column, and I honestly didn't notice this for years.  Please change this in your system.  The [ColumID] is relative to the trace columns available.  Refer to BOL for more detail on these.  The [LogicalOperator] is a parameter in the system stored proc [sp_trace_setfilter].  As it says in the BOL, 'Specifies whether the AND (0) or OR (1) operator is applied. logical_operator is int, with no default.'  The [ComparisonOperator] specifies the type of comparison to be made.  The [Filter] value is what we are truly filtering on.  Some examples are as follows:
 The most important ones are the DML statements below
%Insert %
%Update %
%Delete %
 Other examples are
%DTS Designer%
%.Net SqlClient Data Provider%
Many more filterable items can be used here, allowing your specific needs to be met in your systems.  I always included a web service account that we deemed safe to make modifications, so we would filter it out, and ignore those changes.
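The DML filters above end up as calls to [sp_trace_setfilter], the documented system procedure.  A sketch of what those calls look like, assuming the trace ID returned by sp_trace_create was 1 (column 1 is TextData, logical operator 1 = OR, comparison operator 6 = LIKE, per BOL):

```sql
-- @traceid = 1 is an assumption; use whatever sp_trace_create returned.
EXEC sp_trace_setfilter @traceid = 1, @columnid = 1,
     @logicaloperator = 1, @comparisonoperator = 6, @value = N'%Insert %';
EXEC sp_trace_setfilter @traceid = 1, @columnid = 1,
     @logicaloperator = 1, @comparisonoperator = 6, @value = N'%Update %';
EXEC sp_trace_setfilter @traceid = 1, @columnid = 1,
     @logicaloperator = 1, @comparisonoperator = 6, @value = N'%Delete %';
```

The configuration table simply stores one row per call like these, so the trace-creation code can loop through the enabled rows for a server.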

There is also an [Enabled] field, allowing you to turn on and off these filters. 




