Thank you for posting the excellent quiz at http://www.sql-server-performance.com/quiz_start.asp. (I'm planning to take the SQL Server db development exam soon. I scored only 15 of 20, so I'm no expert!)

In one of your answers, you stated that "all Transact-SQL code should be incorporated into a stored procedure and called using the ADO Command object." We use stored procedures with ADO, but I prefer the simpler, more compact syntax of calling them through recordset or connection objects directly:

    cn.Execute "spTableUpdate " & strParamList

or

    rs.Open "spTableSelect " & strParamList, cn

[For safety from SQL injection attacks (not a serious threat on an internal client-server app where all table access is strictly through stored procedures), we wrap parameters in a function that replaces any single quote with a pair of single quotes.] Can you tell me why so many code examples use the long-winded syntax of setting up Command objects with ADO parameters?

Other comments:

In question 14, you state: "Since you can only choose one clustered index per table, it is worth the extra time to select a column that will take the best advantage of using one. One of the best places to use a clustered index is on columns that are subject to many range queries, such as the [age] column in this question. In this particular example, the other two queries are not range queries. Assuming that there are no other aspects to consider, then making a column that is subject to range queries a clustered index is often the best choice." It might be worth mentioning that the clustered index key is embedded in every non-clustered index on the table, so narrow candidates are better than wide ones -- tinyint is ideal.

In one question, you used an example of denormalizing where my first suggestion would have been an indexed (materialized) view. I suspect an indexed view would be a lot less disruptive to the application (cheaper to implement), but wouldn't be as effective as denormalizing.
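For readers unfamiliar with indexed views, here is a minimal sketch of the alternative I have in mind. The table and column names (Orders, OrderDetails, etc.) are my own invention, not from the quiz; the essential requirements are SCHEMABINDING, two-part object names, COUNT_BIG(*) when grouping, and the unique clustered index that actually materializes the view.

```sql
-- Hypothetical schema for illustration only.
-- An indexed view pre-joins and pre-aggregates the data the
-- application needs, instead of denormalizing the base tables.
CREATE VIEW dbo.vOrderTotals
WITH SCHEMABINDING
AS
SELECT o.CustomerID,
       COUNT_BIG(*) AS OrderCount,   -- COUNT_BIG(*) is required in a grouped indexed view
       SUM(d.Qty)   AS TotalQty
FROM dbo.Orders o
JOIN dbo.OrderDetails d ON d.OrderID = o.OrderID
GROUP BY o.CustomerID
GO

-- The unique clustered index is what makes the view "indexed":
-- the result set is stored and maintained automatically.
CREATE UNIQUE CLUSTERED INDEX IX_vOrderTotals
ON dbo.vOrderTotals (CustomerID)
GO
```

The base tables stay normalized, so existing inserts and updates keep working unchanged; the cost is extra index maintenance on every write, which is why I'd expect it to be cheaper to implement but not as fast as true denormalization.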
In question 19, you show a trigger that selects into variables for insert or update:

    select @min_lvl = min_lvl,
           @max_lvl = max_lvl,
           @emp_lvl = i.job_lvl,
           @job_id  = i.job_id
    from employee e, jobs j, inserted i
    where e.emp_id = i.emp_id AND i.job_id = j.job_id

    IF (@job_id = 1) and (@emp_lvl <> 10)
    begin
       raiserror ('Job id 1 expects the default level of 10.', 16, 1)
       ROLLBACK TRANSACTION
    end
    ELSE IF NOT (@emp_lvl BETWEEN @min_lvl AND @max_lvl)
    begin
       raiserror ('The level for job_id:%d should be between %d and %d.', 16, 1, @job_id, @min_lvl, @max_lvl)
       ROLLBACK TRANSACTION
    end

How will this work if multiple rows are inserted or updated in a single statement, and the out-of-spec rows were hidden after the first one? Wouldn't you want:

    IF EXISTS (SELECT 1 FROM inserted WHERE job_id = 1 AND job_lvl <> 10)
       ...<raise error>

Shouldn't we implement this rule with a check constraint instead of the trigger? The second check is a little trickier, since it involves a JOIN to the jobs table. I would suggest:

    IF EXISTS (SELECT 1
               FROM inserted i
               JOIN jobs j ON j.job_id = i.job_id
               WHERE i.job_lvl NOT BETWEEN j.min_lvl AND j.max_lvl)
       ...<raise error>

(I haven't tested this. Note that you can't assign variables inside an EXISTS subquery, so reporting the specific offending values in the error message would take a separate SELECT against one of the violating rows.)

I'm surprised your answer favors indexes on tables that would typically be small.

Thanks again!
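To make the suggestion concrete, here is a sketch of what I mean, written against the pubs sample schema (employee, jobs) that the question uses. The constraint and trigger names are mine; pubs ships with its own constraints and triggers on these tables, so this is an illustration of the pattern rather than a drop-in script.

```sql
-- Rule 1 involves only columns of the row being checked,
-- so a check constraint covers it -- no trigger needed.
ALTER TABLE employee
ADD CONSTRAINT CK_employee_job1_level
CHECK (job_id <> 1 OR job_lvl = 10)
GO

-- Rule 2 needs a JOIN to jobs, so it stays in a trigger,
-- written set-based so it catches a violation in ANY row of
-- a multi-row INSERT or UPDATE, not just the first row that
-- happens to land in the variables.
CREATE TRIGGER trg_employee_job_lvl
ON employee
FOR INSERT, UPDATE
AS
IF EXISTS (SELECT 1
           FROM inserted i
           JOIN jobs j ON j.job_id = i.job_id
           WHERE i.job_lvl NOT BETWEEN j.min_lvl AND j.max_lvl)
BEGIN
   RAISERROR ('One or more rows have a job_lvl outside the range allowed for that job.', 16, 1)
   ROLLBACK TRANSACTION
END
GO
```

The error message here is generic because, as noted above, the EXISTS test itself can't capture the offending values; if per-row detail matters, a second SELECT against inserted can fetch one violating row for the message.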