SQL Server Performance Forum – Threads Archive

Help optimise a simple select * from table query

Hi Guys
I need some help understanding why a table is slow and what my options are for speeding up queries.
I have a main table with the following structure:

CREATE TABLE [dbo].[Records] (
[RecordsId] [int] IDENTITY (1, 1) NOT NULL ,
[Title] [nvarchar] (10) ,
[FirstName] [nvarchar] (50) ,
[LastName] [nvarchar] (50) ,
[Email] [nvarchar] (100) ,  -- column name assumed; it was missing from the post
[Password] [nvarchar] (50) ,
[Age] [smallint] NULL ,
[Address1] [nvarchar] (50) ,
[Address2] [nvarchar] (50) ,
[Street] [nvarchar] (50) ,
[Town] [nvarchar] (50) ,
[County] [nvarchar] (50) ,
[PostCode] [nvarchar] (10) ,
[DateCreated] [datetime] NULL ,
[Resume] [nvarchar] (100) ,
[Letter] [nvarchar] (100) ,
[OtherInfo] [varchar] (8000) ,
[DOB] [smalldatetime] NULL ,
[ApplicationForm] [varchar] (100) ,
[UserName] [varchar] (255) ,
[EducationQualifications] [text] ,
[EmployerHistory] [text] ,
[Telephone] [varchar] (255) ,
[Mobilephone] [varchar] (255) ,
[Gender] [varchar] (50)
)

In order to test the database under heavy load I pumped in 100,000 records as a starting point, and I am still stuck there. A simple "select * from Records" takes 45 seconds to complete. RecordsId is a primary key. All my queries join this table and there are some really complex queries I need to optimise, but I find myself stranded optimising this simple query itself. Where am I wrong in the design? Thanks a bunch folks
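A loop along these lines is one way to pump in that many test rows (a minimal sketch; it populates only a few columns with made-up values, and the remaining nullable columns default to NULL):

SET NOCOUNT ON

DECLARE @i int
SET @i = 0
WHILE @i < 100000
BEGIN
    INSERT INTO dbo.Records (FirstName, LastName, OtherInfo)
    VALUES ('Test', 'User', REPLICATE('x', 4000))
    SET @i = @i + 1
END

SET NOCOUNT OFF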
webby
Make the identity column the primary key and create a clustered index on it.
Use SET NOCOUNT ON in your SQL to suppress the "rows affected" messages.
Save your SELECT query in a stored procedure and run the procedure to check if it helps.
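Put together, those suggestions look roughly like this (a sketch; the constraint and procedure names are made up):

-- Make the identity column the primary key with a clustered index
-- (skip this step if the primary key already exists).
ALTER TABLE dbo.Records
    ADD CONSTRAINT PK_Records PRIMARY KEY CLUSTERED (RecordsId)
GO

-- Wrap the query in a stored procedure and suppress the "rows affected" messages.
CREATE PROCEDURE dbo.usp_GetRecords
AS
    SET NOCOUNT ON
    SELECT * FROM dbo.Records
GO

EXEC dbo.usp_GetRecords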
The index is a clustered index and it is on the identity column. One of the fields is a varchar (8000) field, and including it in the select seems to slow the query quite a lot. But I do need this field in the select. Any ways of optimising this? Thanks
quote: In order to test the database under heavy load I pumped in 100,000 records as a starting point, and I am still stuck there. A simple "select * from Records" takes 45 seconds to complete. RecordsId is a primary key. All my queries join this table and there are some really complex queries I need to optimise, but I find myself stranded optimising this simple query itself. Where am I wrong in the design?
You can't expect better performance when you are returning text columns for 100,000 rows. There is a "text in row" option that can be used; whether it is really better to use it depends on your data and usage pattern.
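The option is set per table with sp_tableoption, and where the wide columns are not needed you can list only the required columns instead of using SELECT *. A sketch:

-- Keep text/ntext values of up to 1000 bytes in the data row itself
-- (this affects the text columns such as EducationQualifications).
EXEC sp_tableoption 'dbo.Records', 'text in row', '1000'

-- Select only the columns a given screen actually needs:
SELECT RecordsId, FirstName, LastName, Town, PostCode
FROM dbo.Records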