SQL Server Performance

Profiler Trace Different to Query Analyzer Trace

Discussion in 'T-SQL Performance Tuning for Developers' started by Dave Wells, Mar 5, 2003.

  1. Dave Wells New Member

    We have an application in development which will make use of a simple database table. The primary key (TableID) on this table could be done in two ways.

    A) BIGINT IDENTITY
    B) BIGINT with value being generated by application.

    In A) we would have another field in the table of SomethingID that would be a foreign key to another table.

    In B) the SomethingID would be incorporated into the primary key as the highest 8 bits of the number.
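    A minimal sketch of the two designs (table names, column types, and the index name are my assumptions, not from the thread):

    ```sql
    -- Option A: surrogate IDENTITY key plus a separate foreign-key column
    CREATE TABLE dbo.TableA (
        TableID     BIGINT IDENTITY(1,1) NOT NULL PRIMARY KEY,
        SomethingID BIGINT NOT NULL,        -- FK to the parent table
        Stuff       VARCHAR(100) NOT NULL
    )
    CREATE NONCLUSTERED INDEX IX_TableA_SomethingID ON dbo.TableA (SomethingID)

    -- Option B: application-generated key, SomethingID packed into the top 8 bits
    CREATE TABLE dbo.TableB (
        TableID BIGINT NOT NULL PRIMARY KEY,  -- SomethingID in bits 56-63, row sequence below
        Stuff   VARCHAR(100) NOT NULL
    )
    ```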

    Clearly the two methods have very different performance considerations, both for SQL Server and for the application. Leaving aside all of the index-setting issues, I have a particular problem. There is a need to return all records for a particular SomethingID. So we would have either

    A)

    SELECT Stuff
    FROM Table
    WHERE SomethingID = Value

    SomethingID has a non-clustered index on it
    B)

    SELECT Stuff
    FROM Table
    WHERE TableID BETWEEN Value AND Value
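    With SomethingID held in the highest 8 bits of the 64-bit key, the BETWEEN bounds for one SomethingID could be derived along these lines (a sketch; the variable names are mine, and note that the top bit of a BIGINT is the sign bit, so SomethingID values above 127 would produce negative keys):

    ```sql
    DECLARE @SomethingID BIGINT, @Low BIGINT, @High BIGINT
    SET @SomethingID = 42
    SET @Low  = @SomethingID * CAST(72057594037927936 AS BIGINT)  -- 2^56: shift into bits 56-63
    SET @High = @Low + CAST(72057594037927935 AS BIGINT)          -- 2^56 - 1: all lower bits set

    SELECT Stuff
    FROM TableB
    WHERE TableID BETWEEN @Low AND @High
    ```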

    Now I've populated my sample tables with 1 million records each. I have three sample tables with differing fill factors on the primary key index for B), and a single sample table for A).

    My sample queries for each of the four tables are set to return the same number of rows (100K), and I've turned on the option to discard results in Query Analyzer.

    The execution plans in Query Analyzer show that A) has a 73% cost relative to the batch and B) has 9% for each of the three tables.

    Turning on a trace in Query Analyzer shows similar durations and CPU usage, but the reads are very different: A) has 20 reads but B) has 200 reads for each table.

    My Profiler trace shows completely different results: A) has 4400 reads but B) has 1200 reads for each table. The CPU and duration values are wildly different too, but at least they change to the same degree.

    What's going on here?

    I've been using the Profiler values for my other tests, which are

    1. SELECT a single row for TableID (primary key)
    2. UPDATE a single row for TableID (primary key)

    and they are showing much lower CPU for A) than B), but other than that the durations and reads are the same.
  2. Dave Wells New Member

    I should point out that in A) SQL Server isn't using the index on SomethingID; instead it's using the primary key index. I've just tried forcing it to use the index on SomethingID and that's very bad.
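    For reference, forcing the non-clustered index can be done with a table hint (the index name here is hypothetical):

    ```sql
    -- Force the optimizer onto the non-clustered index
    SELECT Stuff
    FROM TableA WITH (INDEX(IX_TableA_SomethingID))  -- hypothetical index name
    WHERE SomethingID = 42
    ```

    With 100K matching rows out of 1 million, the seek plus a bookmark lookup per row is usually far more expensive than a single scan of the clustered index, which would explain why forcing it performs badly.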
  3. bradmcgehee New Member

    When you are referring to reads, are these physical or logical reads?
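    One standard way to see the two kinds of reads side by side is SET STATISTICS IO, which reports both for each statement (the query is from the thread; the output line is illustrative, not a real measurement):

    ```sql
    SET STATISTICS IO ON

    SELECT Stuff
    FROM TableA
    WHERE SomethingID = 42

    -- Messages pane shows something like:
    -- Table 'TableA'. Scan count 1, logical reads ..., physical reads ...,
    --   read-ahead reads ...
    ```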

    I am not sure how to respond to your specific questions, but here are some general recommendations that may or may not apply to you. Let me know how close these come, and then perhaps I can offer more specific advice once I better understand what is going on.

    Whenever you have to return large amounts of data, the SQL Server optimizer will almost always perform a table scan or index scan, and not use a non-clustered index. This is because, overall, it is more efficient to scan large amounts of data than to seek through a huge index and then fetch the matching data pages. If you have the choice, it is more efficient to return large amounts of data from a clustered index (this is what is happening in your case) than from a heap or a non-clustered index.

    Generally speaking, it is better from a design standpoint to use a Primary Key that is meaningless instead of attaching some meaning to it.

    ------------------
    Brad M. McGehee
    Webmaster
    SQL-Server-Performance.Com
  4. Dave Wells New Member

    That was it. I was looking at different kinds of reads in the two types of traces.
