Fetching records from HUGE table – need help | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

Fetching records from HUGE table – need help

Hi, I have a huge table for activity logs with about a quarter billion records in it. The application runs a query to view activities; the query has been optimized to use the right index and keys (of course with NOLOCK hints, so it does not block other use cases and transactions inserting records into the table), but it still takes a good amount of time to fetch data (more than 10 minutes). Is there a way to improve performance when querying such a huge table? Your help in this regard is much appreciated. -Rocky
Can you post the stats of the query that takes 10 minutes?
How many rows does your query return?
Do you have the right indexes? Do you delete/update records in this table? Did you check the fragmentation? If you are using SQL Server 2005, take a look at table partitioning. You could also create a new filegroup on a separate disk and move the table to it, to isolate it from other I/O requests.
MohammedU.
Moderator
SQL-Server-Performance.com
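As a rough sketch of the filegroup suggestion above (the database, filegroup, file names, and path are made up for illustration; the clustered index definition would have to match your actual schema):

```sql
-- Hypothetical example: move the CADR table to its own filegroup
-- on a separate physical disk to isolate its I/O.
ALTER DATABASE MyAppDb ADD FILEGROUP FG_ActivityLog;

ALTER DATABASE MyAppDb ADD FILE
(
    NAME = 'ActivityLogData',
    FILENAME = 'E:\SQLData\ActivityLogData.ndf',  -- separate disk
    SIZE = 10GB
) TO FILEGROUP FG_ActivityLog;

-- Rebuilding the clustered index ON the new filegroup moves the data there.
CREATE CLUSTERED INDEX CIX_CADR_R_CRE_TIME
ON CADR (R_CRE_TIME)
WITH (DROP_EXISTING = ON)
ON FG_ActivityLog;
```

Note that rebuilding a clustered index on a quarter-billion-row table is itself a heavy, long-running operation and would need a maintenance window.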
Just to add to MohammedU's reply:
– What volume of data are you expecting?
– Does it actually use the indexes you've tuned it for? (The query plan can change over time!)
– How much I/O does SQL Server need to do to fulfill your query? (And what kind of disks: RPM, cache?)
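One way to measure how much I/O the query generates (a minimal sketch; substitute the query you are diagnosing for the placeholder SELECT):

```sql
-- Report per-table reads and timing for the statement between the flags.
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT COUNT(*) FROM CADR WHERE R_CRE_TIME >= '20070101';  -- your query here

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;
-- The Messages tab then shows logical reads, physical reads, and
-- read-ahead reads per table, plus CPU and elapsed time.
```

High physical reads relative to logical reads would point at a disk bottleneck rather than a plan problem.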
It would be quite interesting to see the SELECT statement in the first place. Along with the execution plan, perhaps. —
Frank Kalis
Moderator
Microsoft SQL Server MVP
Webmaster:http://www.insidesql.de
Here is the select statement:

SELECT Code_Type, tbl_name, A.r_cre_id, A.r_cre_time, cur_rec, lodgement_id
FROM CADR A, COCR C
WHERE C.BANK_ID = @P1
AND C.LANG_ID = @P2
AND C.CODE_TYPE IN ('ACEV', 'OPRM', 'PAYL', 'PMTR', 'TMPL')
AND C.CD_DESC = A.TBL_NAME
AND C.CM_CODE = A.FUNC_CODE
AND A.FREE_TEXT_1 IN ('123454564654')
AND A.R_CRE_TIME >= @P6
AND A.R_CRE_TIME <= @P7
ORDER BY A.R_CRE_TIME DESC
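For readability, the same query can be written with explicit JOIN syntax (logically equivalent to the comma join; the optimizer should produce the same plan for both):

```sql
SELECT Code_Type, tbl_name, A.r_cre_id, A.r_cre_time, cur_rec, lodgement_id
FROM CADR AS A
INNER JOIN COCR AS C
    ON  C.CD_DESC = A.TBL_NAME
    AND C.CM_CODE = A.FUNC_CODE
WHERE C.BANK_ID = @P1
  AND C.LANG_ID = @P2
  AND C.CODE_TYPE IN ('ACEV', 'OPRM', 'PAYL', 'PMTR', 'TMPL')
  AND A.FREE_TEXT_1 = '123454564654'   -- single-value IN rewritten as equality
  AND A.R_CRE_TIME BETWEEN @P6 AND @P7
ORDER BY A.R_CRE_TIME DESC;
```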

Check the execution plan with and without the NOLOCK hint; I would guess the default behavior causes this kind of performance loss on that data volume. It also depends on the server's resources and the TEMPDB settings. Satya SKJ
Microsoft SQL Server MVP
Writer, Contributing Editor & Moderator
http://www.SQL-Server-Performance.Com
This posting is provided AS IS with no rights for the sake of knowledge sharing. The greatest discovery of my generation is that a human being can alter his life by altering his attitudes of mind.
Rocky, post the I/O stats of your query and how many rows it returns.
Does your A.R_CRE_TIME column have an index? MohammedU.
Moderator
SQL-Server-Performance.com
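A quick way to check which indexes exist on the table and whether R_CRE_TIME appears in any of them (a sketch using the catalog views available from SQL Server 2005 on; the schema name is assumed to be dbo):

```sql
-- List every index on CADR with its key columns, in key order.
SELECT i.name AS index_name,
       ic.key_ordinal,
       c.name AS column_name
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
  ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c
  ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('dbo.CADR')
ORDER BY i.name, ic.key_ordinal;
```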
It is important to look at the execution plan: how many rows are involved in each step, not just in the final result. In any case, your indexes need to be something like:

table: COCR
columns: BANK_ID, LANG_ID, CODE_TYPE, CD_DESC, CM_CODE

table: CADR
columns: TBL_NAME, FUNC_CODE, R_CRE_TIME, FREE_TEXT_1, r_cre_id

In addition, the columns cur_rec and lodgement_id should be appended to the appropriate indexes.
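The index suggestions above could be created along these lines (the index names are made up; on SQL Server 2005 the trailing non-key columns can go into an INCLUDE clause to keep the index key narrow while still covering the query):

```sql
CREATE NONCLUSTERED INDEX IX_COCR_Lookup
ON COCR (BANK_ID, LANG_ID, CODE_TYPE, CD_DESC, CM_CODE);

CREATE NONCLUSTERED INDEX IX_CADR_Lookup
ON CADR (TBL_NAME, FUNC_CODE, R_CRE_TIME, FREE_TEXT_1)
INCLUDE (r_cre_id, cur_rec, lodgement_id);  -- covering columns, SQL 2005+
```

A covering index lets the query be answered entirely from the index pages, avoiding bookmark lookups into the quarter-billion-row base table.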

1) Get bigger hardware? :) Seriously though, while this query is running, check things like Avg Disk Queue Length and various I/O wait stats. Also, what ARE the specifications of the machine this is running on?

2) External disk file fragmentation?

3) Internal object (both data AND index) fragmentation?

4) Use multiple filegroups for tables/indexes to improve the ability to do asynchronous and parallel I/O.

5) The text of the query you posted makes me wonder if it is being called by an external mechanism (i.e. ADO) that could be using a VERY BAD form of cursor for the execution. Do you get the same execution time for the query if you execute it hard-coded in QA/SSMS?

6) Just for curiosity's sake, what is the estimated query cost for this query? It could well be a telephone number in size. :O Also, what are the average number of rows per page and the total page count for this table?

7) Consider using an archival mechanism whereby you move old records to another table(s) to cut I/O.

8) In a similar vein, consider using partitioned tables to again spread the I/O around and perhaps limit the total number of pages read and index depth(s).

SQLGuru
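Points 3 and 6 above (internal fragmentation, page count, and rows per page) can be checked with a DMV available from SQL Server 2005 on (a sketch; SAMPLED mode keeps the scan reasonably cheap on a table this size, while LIMITED would not populate record_count):

```sql
-- Fragmentation, page count, and rows-per-page for every index on CADR.
SELECT i.name AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.page_count,
       ps.record_count / NULLIF(ps.page_count, 0) AS avg_rows_per_page
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.CADR'), NULL, NULL, 'SAMPLED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id;
```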
Thanks to SQLGuru and the other gurus who replied to this :)