
SQL Server Performance Forum – Threads Archive

sql server database with 1.7 million records

Hi,

I have a SQL Server database with 1.7 million records in a table, with about 30 fields.

When I run SELECT * FROM tablename it can take over 5 minutes.

Before you ask, I do want to return all columns and all rows :)

I have a non-clustered index on the id field.

How can I get this time down, or is it normal?

Thanks
Jerry
You want to return 1.7 million rows?? I don't believe that. Why? This query will run as quickly as your network and disk system will allow. You will probably find it is your network causing the bottleneck.
I'll second Simon's questions. Can you provide more information?
Any index is wasted in your case, since SELECT * with no WHERE clause will always give you a table scan. —
Frank Kalis
Microsoft SQL Server MVP
http://www.insidesql.de
Blogged today? http://www.insidesql.de/blogs
I support PASS Deutschland e.V. http://www.sqlpass.de

1. I’ll third the other responses. No end user can possibly sit and scroll through that many records.
2. Without asking whether you really want it all, my guess is that you are trying to do something else with the data other than presenting it to a user. That means you could likely put the results into a data file and then transmit/transfer the file faster than you can pull that much data through the DBCLIENT library.
3. If you have a lot of CHAR type fields, which are padded, you could consider using RTRIM on those fields so that you are not sending millions of blank spaces across the network. So instead of SELECT * you could issue a command like: SELECT field1, field2, field3, RTRIM(field4), field5, RTRIM(field6), etc. (see the sketch after this list). I'm not sure how much improvement you'd see without knowing your data structure.
4. Ok, so I’m not that strong and I gotta ask: WHY?
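A minimal sketch of the RTRIM idea from point 3, assuming a hypothetical Addresses table; all column names are placeholders:

    -- Trim the trailing pad spaces off the CHAR columns so they
    -- are not shipped across the network with every row.
    SELECT  AddressID,
            RTRIM(Street)   AS Street,
            RTRIM(City)     AS City,
            RTRIM(PostCode) AS PostCode
    FROM    dbo.Addresses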
Hi,

I thought there would be a lot of inquisitive minds :)

The desktop application we are thinking of doing is for users to query an address database.

As well as allowing direct querying...

My idea is to bring back all the addresses into a datagrid and then allow users to apply filters on different columns as they search for data. The filters would work client side, as opposed to on the database.

Address usage is not consistent, so our searching algorithm cannot catch all addresses (i.e. users may use local names whereas our address database would have the official name).

There are other ways to do this, obviously, but the grid control has built-in filters and it would be a bit quicker to develop to just bring back all the records and then do all the filtering on the datagrid.

Any ideas?

Thanks
Jerry

P.S.
.NET Framework 1.1
C#
SQL Server 2000
I'd suspect that your address querying doesn't have to be up-to-the-minute data. Thus, you could probably get away with having some DTS package run once every X hours and dump the 1.7 million records out to a flat file on your network. The desktop applications could then load the data from the output file instead of having to pound the database. If they are making changes, you do have the ID field of the record, so you could go out and pull just that record from the live database. Just a thought.
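A rough sketch of that scheduled export, assuming a hypothetical Addresses table. The bcp call could sit in a DTS Execute Process task, or be run from T-SQL via xp_cmdshell as here; the server name, database, and share path are all placeholders:

    -- Dump the whole table to a character-mode flat file on a network share.
    -- -c = character format, -T = trusted connection, -S = server name.
    EXEC master..xp_cmdshell
        'bcp "MyDb.dbo.Addresses" out "\\fileserver\share\addresses.dat" -c -T -S MYSERVER'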
For these kinds of row counts you can't really wait for all the data to arrive in the client app; that's just plain silly! You need some sort of paging. Let them have data grids where they can select a relevant Country/State or City, or enter a zipcode (range), a name starting with 'A', whatever... Then return the matching names, with for example a zipcode, something that lets them drill down in the list returned. Next, let them select the name(s) that they're interested in, and return the details in full.
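One common way to do that paging on SQL Server 2000 (which has no ROW_NUMBER yet) is keyset paging on an indexed key; a minimal sketch with hypothetical table and column names:

    -- Return the next page of 50 addresses after the last key the client saw.
    -- @LastID is 0 for the first page.
    DECLARE @LastID int
    SET @LastID = 0

    SELECT TOP 50 AddressID, Street, City, PostCode
    FROM   dbo.Addresses
    WHERE  AddressID > @LastID
    ORDER  BY AddressID

The client remembers the last AddressID it displayed and passes it back for the next page, so each page is a cheap index seek rather than a scan.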
Sending that amount of data over the network for a simple filter and search is really a waste of resources. If you want to give your users maximum flexibility in defining search criteria, you might want to check out: http://www.sommarskog.se/dyn-search.html
Usually you would want to stay away from dynamic SQL as far as possible. However, when it comes to complex searches like the one you have in mind, dynamic SQL is mostly the clear winner, both in terms of performance and code maintainability. —
Frank Kalis
Microsoft SQL Server MVP
http://www.insidesql.de
Blogged today? http://www.insidesql.de/blogs
I support PASS Deutschland e.V. http://www.sqlpass.de
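A minimal sketch of the dynamic search pattern the linked article describes, using sp_executesql with parameters; the table, column, and procedure names are hypothetical:

    -- Build the WHERE clause only from the criteria the user actually supplied,
    -- and pass the values as parameters rather than concatenated literals.
    CREATE PROCEDURE dbo.SearchAddresses
        @City     varchar(50) = NULL,
        @PostCode varchar(10) = NULL
    AS
    DECLARE @sql nvarchar(4000)

    SET @sql = N'SELECT AddressID, Street, City, PostCode
                 FROM dbo.Addresses
                 WHERE 1 = 1'

    IF @City IS NOT NULL
        SET @sql = @sql + N' AND City = @City'
    IF @PostCode IS NOT NULL
        SET @sql = @sql + N' AND PostCode = @PostCode'

    EXEC sp_executesql @sql,
         N'@City varchar(50), @PostCode varchar(10)',
         @City, @PostCode

Because the values travel as sp_executesql parameters instead of being spliced into the string, execution plans can be reused and SQL injection is avoided, which is what makes dynamic SQL the winner for this kind of search.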
