Hello, I am working on a very large table with over 200 million rows. The table is very simple and cannot be normalized further:

CREATE TABLE [dbo].[Company](
    [ID] [bigint] IDENTITY(1,1) NOT NULL,
    [CompanyName] [nvarchar](255) NOT NULL,
    CONSTRAINT [PK_Company] PRIMARY KEY CLUSTERED
    (
        [ID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF,
            ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]

What I am trying to do is get a count of records matching some criteria. For example:

SELECT COUNT(ID) FROM dbo.Company WHERE CompanyName LIKE 'a%'

Yes, I have a clustered index on ID and a non-clustered index on CompanyName. The database is on a dual-core 2.4 GHz SQL Server (Standard edition) machine with 2 GB of RAM. The count takes over a minute and returns roughly 20 million. Remember, I am only interested in the count, not the rows themselves. Any clues or ideas on how to optimize this? Thanks in advance!
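One idea I have been toying with, though I am not sure it is the right approach, is an indexed view that pre-aggregates the counts by first letter so the scan happens at write time instead of query time. A rough sketch of what I mean (the view and index names are made up, and I have not verified all the indexed-view restrictions apply cleanly here):

```sql
-- Sketch only: vw_CompanyFirstLetter is a hypothetical name.
-- Indexed views must be created WITH SCHEMABINDING and use COUNT_BIG(*)
-- before a clustered index can be built on them.
CREATE VIEW dbo.vw_CompanyFirstLetter
WITH SCHEMABINDING
AS
SELECT LEFT(CompanyName, 1) AS FirstLetter,
       COUNT_BIG(*)         AS CompanyCount
FROM dbo.Company
GROUP BY LEFT(CompanyName, 1);
GO

-- Materializing the view makes SQL Server maintain the counts automatically.
CREATE UNIQUE CLUSTERED INDEX IX_vw_CompanyFirstLetter
    ON dbo.vw_CompanyFirstLetter (FirstLetter);
```

If that works, a count for one letter should be a single-row seek against the view (on Standard edition I believe the query would need a WITH (NOEXPAND) hint to use it), at the cost of extra overhead on every insert/update to Company. Would that be a sensible trade-off here?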