SQL Server Performance

Memory Based Temp Table Performance

Discussion in 'Analysis Services/Data Warehousing' started by gedixon, Oct 22, 2003.

  1. gedixon New Member

    I am looking for a little advice on a reporting approach. I am currently working on a relatively large-scale reporting application that, when run, can build a report from data spanning anywhere from 1 to 220 tables. I would preferably like to use memory-based temp tables created on the fly in my stored procedures, but I am not sure whether I will hit a huge performance issue with data spanning as many as 200+ tables. There could be as many as 2 or 3 million records used to build a report. With this many records, am I going to run into performance issues using memory-based temp tables? If so, is there a more appropriate approach, such as using tables actually written to disk? Any insights and help would be greatly appreciated.

    Thanks
  2. satya Moderator

    What are the hardware resources, such as CPU, memory, disks, etc.?
    What level of SQL and service pack are you using?
    Any network issues?
    What is the user base, and how about database health checks and maintenance plans?

    Have you at any time used PROFILER to monitor the slow-running queries?

    _________
    Satya SKJ
    Moderator
    SQL-Server-Performance.Com
  3. Twan New Member


    Yes, you will run into performance problems. Memory-based tables are designed for a small number of rows (say, less than a few thousand). You can't index them, so they will all cause table scans, and joining lots of them together would be a nightmare.

    Best to create either # temp tables or standard tables...

    Cheers
    Twan
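
    As a sketch of Twan's suggestion: a table variable and a #temp table are declared differently, and only the #temp table can take additional indexes after creation (the table and column names here are purely illustrative):

    ```sql
    -- Table variable: only constraints declared inline (e.g. PRIMARY KEY),
    -- no indexes can be added afterwards, so large joins tend to scan it.
    DECLARE @Orders TABLE (OrderID int PRIMARY KEY, CustomerID int, Amount money);

    -- #temp table: a real table in tempdb, scoped to this session/procedure.
    CREATE TABLE #Orders (OrderID int, CustomerID int, Amount money);

    -- Indexes can be added after the table exists, which helps large joins.
    CREATE CLUSTERED INDEX IX_Orders_OrderID ON #Orders (OrderID);
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON #Orders (CustomerID);

    DROP TABLE #Orders;  -- also dropped automatically when the session ends
    ```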
  4. gedixon New Member

    I have two servers that the stored procs could run off of.

    One has 1 GB of memory and 2 x 1.2 GHz PIII processors.

    The other has 4 GB of memory and 4 x 1.8 GHz P4 processors.

    Both have SQL Server 2000 with SP3 installed. Would the approach be possible on both of these machines? Network traffic is not a concern, as the reporting procedures will run on the same machine the databases are on.
  5. gedixon New Member

    Aren't #temp tables also created in memory? Or are they written to disk but scoped to the stored procedure? Maybe I am a little confused about this. When I was talking about temp tables, I meant #temp tables. Thanks!!
  6. Twan New Member

    #temp tables are really just regular tables, but created in tempdb and automatically given a unique name for each user (as opposed to table variables)

    A #temp table can be indexed which can be better for performance.

    You may still have your work cut out in terms of performance tuning, but there won't be a memory issue as such. The amount of memory required will depend on the amount of data we're talking about reading/writing.

    After writing some code, you may want to use Perfmon and Profiler to get an idea of performance and where the bottlenecks may be.

    Cheers
    Twan
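
    The pattern Twan describes, written as a stored procedure, might look something like this (all object names are hypothetical, for illustration only):

    ```sql
    CREATE PROCEDURE dbo.rpt_SalesSummary
    AS
    BEGIN
        -- Stage the raw rows into a #temp table (stored in tempdb, not RAM-only).
        SELECT s.SaleID, s.CustomerID, s.Amount
        INTO #Sales
        FROM dbo.Sales s;

        -- Index after loading, so the bulk insert itself stays fast.
        CREATE CLUSTERED INDEX IX_Sales_CustomerID ON #Sales (CustomerID);

        -- The report query can now seek on the index instead of scanning.
        SELECT c.CustomerName, SUM(t.Amount) AS TotalAmount
        FROM #Sales t
        JOIN dbo.Customers c ON c.CustomerID = t.CustomerID
        GROUP BY c.CustomerName;
    END
    ```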
  7. ykchakri New Member

    Both #temp tables and table variables are stored as regular tables in tempdb. But table variables take fewer locking and logging resources, and using them minimizes recompilations of stored procedures.

    However, because of the large size and number of such tables in your case, it will be more beneficial to use #temp tables. You will then be able to index these tables and gain performance on data retrieval, which will outweigh the benefits of table variables.
  8. homam New Member

    If you don't have a data warehouse in place, why don't you create some kind of mini-ETL process that periodically pulls updated information from the 200 OLTP tables and places it in some kind of denormalized dimensional form? Then you can run your reports off of the dimensional tables.

    The idea of hitting 200 tables each time you generate a single report is extremely inefficient. What if you have 20 reports? Do you want to re-execute all these joins with the 200 tables for each one of them?

    With a mini-ETL process, you can run hundreds of reports off of the same cached data, and you won't stress the live system with expensive joins every time you generate one. While it's true the mini-ETL is going to execute virtually the same joins, first, subsequent extracts affect much less data (only what changed) and hence run much faster, and second, you effectively cache the joins to be utilized by many reports.
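
    A minimal sketch of such an incremental extract, assuming the source tables carry a last-modified timestamp column (all table, column, and job names here are made up for illustration):

    ```sql
    -- Denormalized reporting table, refreshed periodically rather than per report.
    -- dbo.FactSales and the ETLControl bookkeeping table are illustrative only.
    DECLARE @LastRun datetime;
    SELECT @LastRun = LastRunAt FROM dbo.ETLControl WHERE JobName = 'FactSales';

    -- Pull only rows changed since the last run, pre-joining the OLTP tables once.
    INSERT INTO dbo.FactSales (SaleID, CustomerName, ProductName, Amount, SaleDate)
    SELECT s.SaleID, c.CustomerName, p.ProductName, s.Amount, s.SaleDate
    FROM dbo.Sales s
    JOIN dbo.Customers c ON c.CustomerID = s.CustomerID
    JOIN dbo.Products p ON p.ProductID = s.ProductID
    WHERE s.ModifiedAt > @LastRun;

    UPDATE dbo.ETLControl SET LastRunAt = GETDATE() WHERE JobName = 'FactSales';
    ```

    The reports then query dbo.FactSales directly, so the expensive multi-table joins run once per refresh instead of once per report.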
  9. Twan New Member

    "table variables are stored as regular tables in tempdb"

    I stand corrected ;-)

    Cheers
    Twan
