SQL Server Performance

Fetching records from HUGE table - need help

Discussion in 'SQL Server 2005 T-SQL Performance Tuning' started by Rocky, Feb 1, 2007.

  1. Rocky New Member


    I have this huge table for activity logs, which has a quarter billion records in it.

    There is a query from the application to view activities. The query has been optimized to use the right index and keys (of course with NOLOCK hints, so that it will not block other use cases and transactions inserting records into the table), but it still takes a good amount of time to fetch data (more than 10 minutes).

    Is there a way to improve the performance when I am querying a huge table?

    Your help in this regard is much appreciated.

  2. MohammedU New Member

    Can you post the stats of the query which takes 10 minutes?
    How many rows is your query returning?
    Do you have the right indexes? Do you delete/update records on this table? Did you check the fragmentation?

    Since you are using SQL Server 2005, take a look at table partitioning...

    Also, you can create a new filegroup on a separate disk and move the table to the new filegroup to isolate it from other I/O requests.
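
    As a rough sketch of the partitioning idea (the table name, partition column, boundary dates, and filegroup are all assumptions; adjust to your schema), a date-range partition on the activity-log time column in SQL Server 2005 might look like:

        -- Hypothetical example: range-partition the activity log by month on R_CRE_TIME.
        -- Object names and boundary values are made up for illustration.
        CREATE PARTITION FUNCTION pf_ActivityByMonth (DATETIME)
        AS RANGE RIGHT FOR VALUES ('2006-11-01', '2006-12-01', '2007-01-01');

        CREATE PARTITION SCHEME ps_ActivityByMonth
        AS PARTITION pf_ActivityByMonth ALL TO ([PRIMARY]);

        -- Rebuild the clustered index on the partition scheme to move the data.
        CREATE CLUSTERED INDEX IX_ActivityLog_RCreTime
        ON ActivityLog (R_CRE_TIME)
        WITH (DROP_EXISTING = ON)
        ON ps_ActivityByMonth (R_CRE_TIME);

    In practice you would map each partition to its own filegroup instead of ALL TO ([PRIMARY]) to get the I/O isolation mentioned above.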

  3. alzdba Member

    Just to add to MohammedU's reply:

    - What volume of data are you expecting?
    - Does it actually use the indexes you've tuned it for? (The query plan can change over time!)
    - How much I/O does SQL Server need to do to fulfill your query? (And what kind of disks: rpm, cache?)

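    One way to answer the last question (a sketch to run in Query Analyzer or SSMS against your own query) is to capture I/O and timing statistics along with the actual execution plan:

        -- Report logical/physical reads and CPU/elapsed time for the statement.
        SET STATISTICS IO ON;
        SET STATISTICS TIME ON;

        -- Enable "Include Actual Execution Plan" in SSMS, then run your SELECT here.

        SET STATISTICS IO OFF;
        SET STATISTICS TIME OFF;
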
  4. FrankKalis Moderator

    It would be quite interesting to see the SELECT statement in the first place. Along with the execution plan, perhaps.

    Frank Kalis
    Microsoft SQL Server MVP
  5. Rocky New Member

    Here is the select statement:

    SELECT Code_Type, tbl_name, A.r_cre_id, A.r_cre_time, cur_rec, lodgement_id

    WHERE C.BANK_ID = @P1
    AND C.LANG_ID = @P2
    AND A.FREE_TEXT_1 IN ('123454564654')
    AND A.R_CRE_TIME >= @P6
    AND A.R_CRE_TIME <= @P7
  6. satya Moderator

    Check the execution plan with and without the NOLOCK hint. I would guess the default behavior causes such a performance loss on that volume; it also depends on the server resources and the TEMPDB settings.

    Satya SKJ
    Microsoft SQL Server MVP
    Writer, Contributing Editor & Moderator
    This posting is provided AS IS with no rights for the sake of knowledge sharing. The greatest discovery of my generation is that a human being can alter his life by altering his attitudes of mind.
  7. MohammedU New Member


    Post the I/O stats of your query, and how many rows does it return?
    Does your A.R_CRE_TIME column have an index?
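
    To check what is already indexed (a sketch; the table name CADR is taken from a later reply and may not match your schema):

        -- Quick listing of indexes on the table.
        EXEC sp_helpindex 'CADR';

        -- Or query the SQL Server 2005 catalog views for the indexed columns:
        SELECT i.name AS index_name, c.name AS column_name, ic.key_ordinal
        FROM sys.indexes i
        JOIN sys.index_columns ic
            ON i.object_id = ic.object_id AND i.index_id = ic.index_id
        JOIN sys.columns c
            ON ic.object_id = c.object_id AND ic.column_id = c.column_id
        WHERE i.object_id = OBJECT_ID('CADR')
        ORDER BY i.name, ic.key_ordinal;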

  8. joechang New Member

    It is important to look at the execution plan:
    how many rows are involved in each step, not just the final result.

    In any case, your indexes need to be something like:

    table: COCR
    table: CADR
    columns: TBL_NAME, FUNC_CODE, R_CRE_TIME, FREE_TEXT_1, r_cre_id

    In addition, the columns cur_rec and lodgement_id should be appended to the appropriate indexes.

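    A sketch of such an index on one of the tables (names taken from the posts above; whether they match the real schema is an assumption), using SQL Server 2005 included columns so the extra columns are carried at the leaf level without widening the key:

        -- Hypothetical covering index; INCLUDE avoids key lookups for the
        -- SELECT list columns without making the index key wider.
        CREATE NONCLUSTERED INDEX IX_CADR_Activity
        ON CADR (TBL_NAME, FUNC_CODE, R_CRE_TIME, FREE_TEXT_1)
        INCLUDE (r_cre_id, cur_rec, lodgement_id);
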
  9. TheSQLGuru New Member

    1) Get bigger hardware?? :) Seriously though, while this query is running, check things like Avg Disk Queue Length and various I/O wait stats. Also, what ARE the specifications of the machine this is running on?

    2) External disk file fragmentation?

    3) Internal object (both data AND index) fragmentation?

    4) Use multiple filegroups for tables/indexes to improve the ability to do asynchronous and parallel I/O.

    5) The text of the query you posted makes me wonder if it is being called by an external mechanism (i.e. ADO) that could be using a VERY BAD form of cursor for the execution. Do you get the same execution time for the query if you execute it hard-coded in QA/SSMS?

    6) Just for curiosity's sake, what is the estimated query cost for this query?? It could well be a telephone number in size. :O Also, what is the average number of rows per page and the total page count for this table?

    7) Consider using an archival mechanism whereby you move old records to another table to cut I/O.

    8) In a similar vein, consider using partitioned tables to again spread the I/O around and perhaps limit the total number of pages read and index depth(s).

    SQLGuru

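    The archival idea in point 7 can be sketched roughly as follows (the table names, unique key column, 90-day cutoff, and batch size are all assumptions, not details from this thread):

        -- Hypothetical batched archival: move rows older than 90 days into an
        -- archive table in small chunks so locks and log growth stay manageable.
        DECLARE @cutoff DATETIME;
        SET @cutoff = DATEADD(DAY, -90, GETDATE());

        DECLARE @batch TABLE (r_cre_id INT PRIMARY KEY);  -- assumes r_cre_id is unique

        WHILE 1 = 1
        BEGIN
            DELETE FROM @batch;

            -- Pick one batch of old rows by key.
            INSERT INTO @batch (r_cre_id)
            SELECT TOP (5000) r_cre_id
            FROM ActivityLog
            WHERE R_CRE_TIME < @cutoff;

            IF @@ROWCOUNT = 0 BREAK;

            BEGIN TRANSACTION;

            INSERT INTO ActivityLog_Archive
            SELECT A.*
            FROM ActivityLog A
            JOIN @batch B ON A.r_cre_id = B.r_cre_id;

            DELETE A
            FROM ActivityLog A
            JOIN @batch B ON A.r_cre_id = B.r_cre_id;

            COMMIT TRANSACTION;
        END;
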
  10. Rocky New Member

    Thanks to SQLGuru and the other gurus who replied to this thread. :)
