SQL Server Performance

One large query Vs several small queries

Discussion in 'SQL Server 2008 General Developer Questions' started by utop, May 13, 2010.

  1. utop New Member

    I have been using SQL Server for the past two years. I don't know if this question has been asked earlier, but I didn't find it anywhere. What I have observed is that large queries with a lot of joins are slower than the same work broken into several small queries. For example, a query like:

    select tbl1.name, ...... tbl12.id
    from tbl1
    join tbl2 ON tbl1.tbl12_id = tbl2.tbl12_id
    ..... (all 12 tables joined)
    where tbl1.name = 'abc'

    where tbl1 has a million+ rows, is much slower than, say:

    insert into @tmp
    select [required columns from tbl1]
    from tbl1
    where tbl1.name = 'abc'

    and then using @tmp instead of tbl1. This is just an example; I have observed this in several cases. So my question is: is it a known and accepted practice to break a large query into smaller ones for performance, or is it just that I am not writing the large queries properly? If it is a common practice, then why is the query optimizer not able to break up the large query optimally itself?
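    A more complete sketch of the second approach, with made-up column names just for illustration:

    ```sql
    -- Declare a table variable to hold only the pre-filtered rows from tbl1.
    declare @tmp table (name varchar(100), tbl12_id int);

    insert into @tmp (name, tbl12_id)
    select name, tbl12_id
    from tbl1
    where tbl1.name = 'abc';

    -- Join the small table variable to the remaining tables instead of tbl1.
    select t.name /* , ... tbl12.id */
    from @tmp t
    join tbl2 ON t.tbl12_id = tbl2.tbl12_id;
    -- ... remaining joins ...
    ```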
  2. Madhivanan Moderator

    Yes, breaking it into small pieces may speed up the performance.
    I had the following query some time back:
    select * into new_table from
    (
    select columns from table1 where ..
    union all
    select columns from table2 where ..
    union all
    .
    .
    select columns from table10 where ..
    ) as t
    which took a lot of time. I broke it down into:

    select columns into new_table from table1 where...
    insert into new_table
    select columns from table2
    insert into new_table
    select columns from table3
    .
    .
    .
    insert into new_table
    select columns from table10
    which improved the performance.
  3. utop New Member

    Thanks for the quick response, Madhivanan. What could be the reason for this? Should we always avoid writing big queries? Is it good to use temp tables instead of one big query?
  4. Madhivanan Moderator

    [quote user="utop"]Thanks for the quick response, Madhivanan. What could be the reason for this? Should we always avoid writing big queries? Is it good to use temp tables instead of one big query?[/quote]
    It depends on the situation.
  5. RichHolt New Member

    [quote user="utop"]Thanks for the quick response, Madhivanan. What could be the reason for this? Should we always avoid writing big queries? Is it good to use temp tables instead of one big query?[/quote]
    In some cases a table variable will be better than a temp table; in other cases a temp table will be faster than a table variable. And sometimes, based on the complexity of the query, the available indexes, and the amount of data you are retrieving, a query directly against the base tables is fine. The best thing to do is try it and see. Use the available profiling tools to help you, such as STATISTICS IO, SQL Profiler, and execution plans.
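    For example, you can wrap each variant in STATISTICS IO/TIME and compare the numbers in the Messages tab (table names here are just the placeholders from the original post):

    ```sql
    -- Measure I/O and CPU/elapsed time for each variant.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- Variant 1: the single large join.
    select tbl1.name /* , ... other columns ... */
    from tbl1
    join tbl2 ON tbl1.tbl12_id = tbl2.tbl12_id
    -- ... remaining joins ...
    where tbl1.name = 'abc';

    -- Variant 2: filter into a temp table first, then join.
    select tbl1.name, tbl1.tbl12_id
    into #filtered
    from tbl1
    where tbl1.name = 'abc';

    select f.name /* , ... */
    from #filtered f
    join tbl2 ON f.tbl12_id = tbl2.tbl12_id;
    -- ... remaining joins ...

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;
    ```

    Compare the "logical reads" reported per table between the two runs; the variant with fewer reads is usually the one to keep.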
  6. alent1234 New Member

    I always try to query the minimum amount of data. We have some databases in the TB range, and people always select months and years of data at the same time and then complain that it runs for hours. We tell them to select less data; they never listen.
    The reason it's faster is that there is less data to send over the network. And at some point index seeks will turn into scans, and this really slows things down.
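    For example, an index that covers the filter helps keep the plan as a seek instead of a scan (the names here are purely illustrative):

    ```sql
    -- Nonclustered index on the filter column, with the join key included,
    -- so a query filtering on name can be answered by an index seek alone.
    create nonclustered index IX_tbl1_name
    on tbl1 (name)
    include (tbl12_id);
    ```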
