As a DBA, you can find very useful information in the Windows event logs about important events and the health of your SQL Server and the operating system it runs on. Unfortunately, the logs also contain a lot of useless information. Some applications have a tendency to log hundreds of events every day, filling up the logs very quickly with info that you, as a DBA, do not need. But you still need to see that important message that informs you the server is going to crash if you don’t take action.
Even if your company doesn’t have the tools to automate this task (like OpenView or Operations Manager), you can still get these important messages without clicking through the logs for hours every day to get to the important stuff. How? This article describes how I do it.
In short, a free tool dumps the events into a text file on a central server. Then SQL Server imports this text file into a central management database. From there we delete every message that we don’t need to see.
All you have to do is execute one stored procedure. This way, it takes just 5 minutes to collect all event log messages from the 50 SQL servers in our company. Now, let’s go into the technical details.
To follow these steps, you will need:
– a central database on SQL Server (these scripts have been made on SQL 2005, but should work on older versions as well)
– permissions to remotely read the event logs and to perform xp_cmdshell on the SQL server
– DumpEvt 1.7 (you can download this at the web site referenced below) installed in d:\apps\dumpevt on the central database server.
Whether the scripts provided in this article will work ‘as is’ depends on details like regional settings, so you may have to adapt the script or the dumpevt.ini. Just remember that the end result will be worth it…
Step 1: Prepare the central database
First, let’s create a table that lists all the servers we want to check:
CREATE TABLE tblServers (
    ServerName sysname NOT NULL PRIMARY KEY,  -- the server whose logs we'll read
    Status varchar(20) NOT NULL               -- only servers marked 'active' are processed
)  -- note: the column names here are illustrative; adapt them to your environment
We’ll only look at the servers with a status of ‘active’ (actually, the tblServers in my central database has a lot more columns that are used for other tasks, but we won’t be using them now).
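Registering a server to be monitored is then just a simple insert, for example (again assuming the hypothetical ServerName and Status columns):

```sql
-- ServerName and Status are illustrative column names, not a fixed schema
INSERT INTO tblServers (ServerName, Status)
VALUES ('YourServerName', 'active')
```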
Next, let’s create some staging tables:
CREATE TABLE [dbo].[tblLogStaging](
    -- NOTE: the column list below is illustrative; it must match the field layout
    -- of the DumpEvt csv output (see the DumpEvt help file and dumpevt.ini)
    [ServerName] [varchar](128) NULL,
    [EventTime]  [varchar](50)  NULL,
    [Source]     [varchar](255) NULL,
    [EventID]    [varchar](20)  NULL,
    [EventType]  [varchar](50)  NULL,
    [Desc1]      [varchar](1000) NULL,  -- the event description arrives in 6 fragments
    [Desc2]      [varchar](1000) NULL,
    [Desc3]      [varchar](1000) NULL,
    [Desc4]      [varchar](1000) NULL,
    [Desc5]      [varchar](1000) NULL,
    [Desc6]      [varchar](1000) NULL
)
We’ll use this staging table to filter the messages and concatenate the fragments of the description before loading them into the destination tables. Let’s create these tables next:
CREATE TABLE [dbo].[tblApplicationLog](
    -- NOTE: illustrative layout; the same fields as the staging table,
    -- with the description fragments concatenated into one column
    [ServerName]  [varchar](128) NULL,
    [EventTime]   [varchar](50)  NULL,
    [Source]      [varchar](255) NULL,
    [EventID]     [varchar](20)  NULL,
    [EventType]   [varchar](50)  NULL,
    [Description] [varchar](6000) NULL
)
Create a table tblSystemLog using the same script.
You may notice that there is no table for the security log. The reason for this is that personally I’m not interested in network security, just SQL Server security. Failed SQL Server logins are logged in the application log (if you enable auditing of failed logins on the SQL Server instance, that is).
For the same reason, there are also no tables for the log files that are present only on domain controllers: Directory Service, DNS Server and File Replication Service. However, you could easily modify the scripts to import these logs as well.
Also, in the stored procedure below we’ll start off by cleaning out the destination tables. This is because we’re only interested in new events. Since DumpEvt doesn’t remove events from the event log on the source server, there is no reason to keep them in the central database also. Keeping a history of older events in some sort of archive is beyond the scope of this article.
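The cleanup at the start of the procedure can be as simple as this sketch (the table names come from the scripts above; you could use TRUNCATE TABLE instead if you have the permissions):

```sql
-- Start each run with empty destination tables; we only care about new events
DELETE FROM tblApplicationLog
DELETE FROM tblSystemLog
```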
Feel free to use this procedure as a starting point for your own event log processing needs. Maybe you can write an article about it so everyone can benefit…
Step 2: Creating the text file
To export the event log into a text file, we use a free tool called DumpEvt. It is installed on the management server in d:\apps\dumpevt, where it will create the csv file. By default, DumpEvt appends new event log messages (if any) to the output file. This way, we could dump all the events from all servers first and then import them into SQL. The downside is that the output file could become very large. To prevent this, we first dump the events from one server, import them into SQL, and then move on to the next server. So we want to make sure we start with a fresh output file for every server. Simply deleting the output file won’t do: if there are no new events in a log, DumpEvt will not generate an empty output file, and we’d receive an error when trying to import the missing file into SQL. Instead, we keep an empty file around (right-click in the DumpEvt directory, choose New > Text Document and rename it to Empty.csv) and copy it over the output file before every run.
To copy the empty file we’ll use the following code in the main stored procedure:
EXEC master..xp_cmdshell 'copy d:\apps\dumpevt\empty.csv d:\apps\dumpevt\evtlogdump.csv /Y', no_output
The /Y switch will suppress prompting to overwrite the file if it already exists. The no_output argument specifies that SQL will not display the output from the command shell (which should be: 1 file(s) copied). Remove this argument for troubleshooting purposes.
Next, we’ll start with the application log of the first server. We can simply call DumpEvt from the command line:
SET @sql = 'master..xp_cmdshell ''d:\apps\DumpEvt\DumpEvt.exe /logfile=app /outfile=d:\apps\dumpevt\evtlogdump.csv /computer=' + @server + char(39) + ', no_output'
For an explanation of the syntax and other options, take a look at the help file of DumpEvt. The string function char(39) returns the single quote; this is used to dynamically build the SQL statement without syntax errors. Just replace EXEC with PRINT and you’ll see that the final SQL statement looks like:
master..xp_cmdshell 'd:\apps\DumpEvt\DumpEvt.exe /logfile=app /outfile=d:\apps\dumpevt\evtlogdump.csv /computer=YourServerName', no_output
Every time the inner loop is performed, YourServerName will be a different server of course. Further on, we’ll incorporate this code in the stored procedure that does the actual work (step 5), but you’ll want to test this first. Replace @server with a server name to see if everything works as expected.
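Put together, the inner loop could look like the following sketch: a cursor over tblServers that dumps and imports one server at a time. The ServerName/Status column names are assumptions, and the import/filter steps are only indicated by comments here:

```sql
DECLARE @server sysname, @sql nvarchar(4000)

DECLARE curServers CURSOR FOR
    SELECT ServerName FROM tblServers WHERE Status = 'active'  -- hypothetical columns

OPEN curServers
FETCH NEXT FROM curServers INTO @server

WHILE @@FETCH_STATUS = 0
BEGIN
    -- start with a fresh (empty) output file for this server
    EXEC master..xp_cmdshell 'copy d:\apps\dumpevt\empty.csv d:\apps\dumpevt\evtlogdump.csv /Y', no_output

    -- dump the application log of this server
    SET @sql = 'master..xp_cmdshell ''d:\apps\DumpEvt\DumpEvt.exe /logfile=app /outfile=d:\apps\dumpevt\evtlogdump.csv /computer=' + @server + char(39) + ', no_output'
    EXEC (@sql)

    -- importing and filtering the csv would go here (steps 3 and 4)

    FETCH NEXT FROM curServers INTO @server
END

CLOSE curServers
DEALLOCATE curServers
```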
The first time you use DumpEvt to read an event log, it will read the complete log from start to finish. This may take a while, especially over a WAN. DumpEvt saves the number of the last message it read in the registry (on the management server, not on the monitored server), so next time it will only retrieve new messages. This is the default behavior; if you want to see all events from the log, use the /all switch. This can be useful for troubleshooting, but don’t use it for day-to-day event log monitoring.
Step 3: Importing the text file
This is the easy part:
DELETE FROM tblLogStaging
BULK INSERT tblLogStaging FROM 'd:\apps\dumpevt\evtlogdump.csv'
WITH (FIELDTERMINATOR = ',')
We empty the staging table, then use bulk insert to import the events into this table. There are two reasons for using a staging table:
1) The description of the event is split across six fragments in the output file. We’ll have to concatenate these fragments for easy reading.
2) The number of events from a single log can be quite large. It is faster to delete uninteresting events as soon as possible.
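As a sketch of the concatenation, loading the staging rows into the destination table could look like this (the column names Desc1 through Desc6 and the destination columns are assumptions; use whatever your tables actually call them):

```sql
-- Desc1..Desc6 and the destination columns are hypothetical names;
-- ISNULL guards against NULL fragments breaking the concatenation
INSERT INTO tblApplicationLog (ServerName, EventTime, Source, EventID, [Description])
SELECT  ServerName, EventTime, Source, EventID,
        ISNULL(Desc1, '') + ISNULL(Desc2, '') + ISNULL(Desc3, '') +
        ISNULL(Desc4, '') + ISNULL(Desc5, '') + ISNULL(Desc6, '')
FROM    tblLogStaging
```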
Which brings us to the next step: