
SQL Server Performance Forum – Threads Archive

query optimization issue with large number of records

Hi, I have to create a view on the database. From this view I will fetch records and send them to the user. The query is something like:

select top 700000 record0_.*, userinform1_.*
from record record0_, userinformation userinform1_
where record0_.userinformationkey = userinform1_.userinformationkey
  and (record0_.eventdate >= 977356800000 and record0_.eventdate <= 1167264000000
         and record0_.eventtime >= 66600000 and record0_.eventtime <= 86399999
       or record0_.eventdate >= 977443200000 and record0_.eventdate <= 1167350400000
         and record0_.eventtime >= 0 and record0_.eventtime <= 66540000)
  and (record0_.eventtype = 1 or record0_.eventtype = 2 or record0_.eventtype = 4)
order by record0_.eventdate desc, record0_.eventtime desc

This query takes almost 50 seconds. What should I do to optimize it? I have already created the indexes recommended by the Database Engine Tuning Advisor. One limitation: I have to return all 700,000 records in one go; I can't fetch them partially.

Regards,
Sunhcl
When you ran the DTA, did you also check the indexed views option?
Luis Martin
Moderator
SQL-Server-Performance.com
"All in Love is Fair" – Stevie Wonder
All postings are provided “AS IS” with no warranties for accuracy.
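
For reference, the DTA's indexed views option can propose materializing a join like this one. Below is a minimal sketch of such a view; the table and join columns come from the query above, but the view name, the dbo schema, and the recordkey key column are assumptions about the poster's schema:

CREATE VIEW dbo.v_record_user
WITH SCHEMABINDING
AS
SELECT r.recordkey,           -- assumed primary key of record
       r.userinformationkey, r.eventdate, r.eventtime, r.eventtype
FROM dbo.record r
JOIN dbo.userinformation u
  ON r.userinformationkey = u.userinformationkey;
GO
-- A unique clustered index is what makes the view "indexed".
CREATE UNIQUE CLUSTERED INDEX IX_v_record_user
ON dbo.v_record_user (recordkey);
GO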
Be sure to check the query plan for this to see what is really going on.
-----------------------------
Brad M. McGehee, SQL Server MVP
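
One quick way to do that, using standard T-SQL SET options (works in SSMS or sqlcmd):

SET STATISTICS IO ON;    -- per-table logical/physical reads
SET STATISTICS TIME ON;  -- parse/compile and execution CPU/elapsed times
-- In SSMS, also enable "Include Actual Execution Plan" (Ctrl+M),
-- or use SET STATISTICS XML ON to get the actual plan back as XML.
-- ... run the problem query here ...
SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;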
Maybe you could explain the background: why do you need to send 700,000 rows to the user? What do you mean by having to "fetch" the data? Do you send a CSV, Excel, or XML file with the data, or do you just send 700,000 rows to the client? What will the user do with those 700,000 rows next? In addition: do you really need to return all columns from both tables? As mentioned, you have to look at the execution plan to be able to tune the query. Maybe rewriting the query as a union would help, but it is hard to tell without knowing the execution plan.
My guess is that since the query is effectively a union (you select data from two ranges) and you need the first 700,000 rows across both branches ordered by eventdate and eventtime, the execution plan consists of selecting all rows that fit the criteria, sorting them, and only then keeping "just" the first 700,000. If so, you can try selecting the TOP 700,000 rows from the 'higher' range first. If there are fewer than 700,000 rows in that range, you then select TOP (700,000 - <number of rows from the higher range>) from the lower range.
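
A rough sketch of that two-step idea, reusing the tables and ranges from the original query. Assumptions: the #results temp table, the choice of which branch is the 'higher' range, and keeping only the record columns (selecting both tables' columns into a temp table would fail on the duplicate userinformationkey column). Note this split only returns the same rows as the original query if the two ranges do not interleave in the eventdate/eventtime sort order:

DECLARE @remaining int;

-- Step 1: up to 700,000 rows from the assumed 'higher' range.
SELECT TOP 700000 record0_.*
INTO #results
FROM record record0_
JOIN userinformation userinform1_
  ON record0_.userinformationkey = userinform1_.userinformationkey
WHERE record0_.eventdate BETWEEN 977443200000 AND 1167350400000
  AND record0_.eventtime BETWEEN 0 AND 66540000
  AND record0_.eventtype IN (1, 2, 4)
ORDER BY record0_.eventdate DESC, record0_.eventtime DESC;

SET @remaining = 700000 - @@ROWCOUNT;

-- Step 2: top up from the lower range if needed.
-- (TOP with a variable needs SQL Server 2005 or later.)
IF @remaining > 0
    INSERT INTO #results
    SELECT TOP (@remaining) record0_.*
    FROM record record0_
    JOIN userinformation userinform1_
      ON record0_.userinformationkey = userinform1_.userinformationkey
    WHERE record0_.eventdate BETWEEN 977356800000 AND 1167264000000
      AND record0_.eventtime BETWEEN 66600000 AND 86399999
      AND record0_.eventtype IN (1, 2, 4)
    ORDER BY record0_.eventdate DESC, record0_.eventtime DESC;

SELECT * FROM #results
ORDER BY eventdate DESC, eventtime DESC;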

If you retrieve 700,000 rows, it is going to run for a while. What is the record size, by the way?
I hope the delay is just due to the read operation, not a bad plan.
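
For reference, on SQL Server 2005 and later one way to check the average row size is the sys.dm_db_index_physical_stats DMV; dbo.record here is the table from the original query:

SELECT index_id, avg_record_size_in_bytes, page_count
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.record'), NULL, NULL, 'DETAILED');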

There's no easy way to know how to improve this without seeing the query plan, the DDL for each table, and the indexes on the tables.

www.elsasoft.org
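
To gather that index information, sp_helpindex or the sys.indexes catalog view will list what's there; the table names below are the ones from the original query:

EXEC sp_helpindex 'dbo.record';
EXEC sp_helpindex 'dbo.userinformation';

-- Or, on SQL Server 2005 and later, with key column order:
SELECT i.name, i.type_desc, c.name AS column_name, ic.key_ordinal
FROM sys.indexes i
JOIN sys.index_columns ic
  ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns c
  ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.record')
ORDER BY i.name, ic.key_ordinal;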
