We have an application in development which will make use of a simple database table. The primary key (TableID) on this table could be generated in two ways:

A) BIGINT IDENTITY. The table would then have a separate SomethingID column acting as a foreign key to another table.

B) BIGINT with the value generated by the application. SomethingID would be incorporated into the primary key as the highest 8 bits of the number.

Clearly the two methods have very different performance considerations, both in SQL and in the application. Leaving aside all of the index-tuning issues, I have a particular problem: there is a need to return all records for a particular SomethingID. So the query would be either:

A) SELECT Stuff FROM Table WHERE SomethingID = Value
(SomethingID has a non-clustered index on it)

B) SELECT Stuff FROM Table WHERE TableID BETWEEN Value AND Value

I've populated my sample tables with 1 million records each. I have three sample tables with differing fill factors on the primary key index for B), and a single sample table for A). My sample queries against each of the four tables are set to return the same number of rows (100K), and I've turned on discarding of results in Query Analyzer.

The execution plans in Query Analyzer show that A) has a cost of 73% relative to the batch, while B) shows 9% for each of the three tables. Turning on a trace in Query Analyzer shows similar durations and CPU usage, but the reads are very different: A) has 20 reads, while B) has 200 reads for each table.

My Profiler trace shows completely different results: A) has 4400 reads, while B) has 1200 reads for each table. The CPU and duration values are wildly different too, but at least they change to the same degree.

What's going on here? I've been using the Profiler values for my other tests, which are:

1. SELECT a single row by TableID (primary key)
2. UPDATE a single row by TableID (primary key)

and they show much lower CPU for A) than for B), but other than that the durations and reads are the same.
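For reference, here is roughly what the two designs and the range query for B) look like. This is only a simplified sketch: the table and column names (TableA, TableB, Stuff), the varchar type, and the shift by 56 bits are illustrative, and it assumes SomethingID stays small enough that the packed value does not overflow the signed BIGINT range.

    -- Option A: IDENTITY key plus a separate SomethingID column with a non-clustered index
    CREATE TABLE TableA (
        TableID     bigint IDENTITY(1,1) NOT NULL PRIMARY KEY,
        SomethingID bigint NOT NULL,          -- foreign key to the other table
        Stuff       varchar(100) NOT NULL
    );
    CREATE NONCLUSTERED INDEX IX_TableA_SomethingID ON TableA (SomethingID);

    -- Option B: application-generated key, SomethingID packed into the top 8 bits
    CREATE TABLE TableB (
        TableID bigint NOT NULL PRIMARY KEY,  -- (SomethingID * 2^56) + per-Something row counter
        Stuff   varchar(100) NOT NULL
    );

    -- The B) query: derive the BETWEEN bounds from SomethingID
    DECLARE @SomethingID bigint, @LowKey bigint, @HighKey bigint;
    SET @SomethingID = 42;
    SET @LowKey  = @SomethingID * CAST(72057594037927936 AS bigint);   -- shift left 56 bits (2^56)
    SET @HighKey = @LowKey + CAST(72057594037927935 AS bigint);        -- low 56 bits all set (2^56 - 1)

    SELECT Stuff FROM TableB WHERE TableID BETWEEN @LowKey AND @HighKey;

The BETWEEN query scans a contiguous range of the clustered primary key, which is why I expected its I/O profile to differ from the non-clustered index seek in A).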