Hi,

A system I'm looking at has a 400+ user base on a 4-processor server with 4 GB RAM. This hosts three databases:

1 - Main database - 400 tables, heavy querying going on.
2 - Login database - 2 tables, very little activity.
3 - Info database - 5 tables, relatively little activity.

Everything had been going fine, but in the last month or so a select for a single row from the Info database has sometimes been taking 1.8 seconds. The select is against a single table with about 550,000 rows, and the WHERE clause is simple: WHERE key_field = 0x0...(guid)...0000. Performance is poor both when the query is called by apps and when it is executed from Query Analyzer.

The key field is covered as the only field in a unique, non-clustered index. The other indexes on the table are:

Clustered, non-unique: guid + name + flag + date
Non-clustered, non-unique: guid

Restoring a backup of the Info database onto another server allowed me to select 100 rows at random (running DBCC DROPCLEANBUFFERS before each test) in less than two seconds total. The execution plan on the problem server shows that an index seek is being performed.

Any ideas why performance might be so dire when querying this table by an indexed column?

Thanks!
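For reference, the setup looks roughly like the sketch below. All table and column names are placeholders I've invented for illustration (the real schema differs), and I've left the guid literal as a placeholder since I can't post the actual value:

```sql
-- Hypothetical table; real names and column types differ.
CREATE TABLE dbo.InfoTable (
    key_field   UNIQUEIDENTIFIER NOT NULL,  -- the guid column the slow query filters on
    name        VARCHAR(100),
    flag        INT,
    change_date DATETIME
);

-- Clustered, non-unique index: guid + name + flag + date
CREATE CLUSTERED INDEX IX_InfoTable_Clustered
    ON dbo.InfoTable (key_field, name, flag, change_date);

-- Unique, non-clustered index covering the key field alone
CREATE UNIQUE NONCLUSTERED INDEX IX_InfoTable_Key
    ON dbo.InfoTable (key_field);

-- Additional non-clustered, non-unique index on the guid
CREATE NONCLUSTERED INDEX IX_InfoTable_Guid
    ON dbo.InfoTable (key_field);

-- The slow query (guid literal is a placeholder)
SELECT *
FROM dbo.InfoTable
WHERE key_field = '00000000-0000-0000-0000-000000000000';
```

This is sometimes taking 1.8 seconds per row on the problem server, even though the plan shows an index seek.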