
SQL Server Performance Forum – Threads Archive

segmented tables for performance?

The app I am currently working with logs data constantly into a table, where the data may be kept for anything from a few days to many years.
The table has the potential to get quite large, so I was wondering if I should segment it by unique id (10,000 ids per table, for example) or by date (monthly tables, for example). The table will probably grow to around 10 million rows after 5 years, and maybe up to a maximum of 20 million rows. Is segmentation a good idea in terms of performance, or does it not really matter?
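For illustration, the date option would mean something like this – one table per month, each declaring the range it holds with a check constraint (table and column names here are purely hypothetical):

-- one table per month; the check constraint declares which dates it holds
create table LogData_200701 (
    LogId   int      not null,
    LogDate datetime not null
        constraint ck_LogData_200701_date check (LogDate >= '20070101' and LogDate < '20070201'),
    Payload varchar(255) null,
    constraint pk_LogData_200701 primary key (LogId, LogDate)
)

create table LogData_200702 (
    LogId   int      not null,
    LogDate datetime not null
        constraint ck_LogData_200702_date check (LogDate >= '20070201' and LogDate < '20070301'),
    Payload varchar(255) null,
    constraint pk_LogData_200702 primary key (LogId, LogDate)
)

The check constraints are what would later let a union all view over the monthly tables skip any month a date predicate cannot reach.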

In my experience 20m rows is not a problem at all for SQL Server, as long as you have the proper indexes on it when you want to query it. If you don't query against it often, leave it with no indexes to keep the insert performance high, and only add the indexes when you want to query it.
_____________________________________
SqlSpec – a fast, cheap, and comprehensive data dictionary generator for SQL Server 2000 and 2005 – www.elsasoft.org
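A sketch of that add-then-drop index pattern, with hypothetical table and column names:

create table EventLog (
    EventId   int identity not null,
    EventDate datetime     not null,
    Payload   varchar(255) null
)

-- inserts run against the bare heap (no indexes) for maximum insert speed
insert into EventLog (EventDate, Payload) values (getdate(), 'example row')

-- build the index only when a reporting window opens...
create index ix_EventLog_EventDate on EventLog (EventDate)

select count(*) from EventLog where EventDate >= '20070101'

-- ...then drop it again so later inserts stay cheap (SQL 2000 syntax;
-- on 2005 use: drop index ix_EventLog_EventDate on EventLog)
drop index EventLog.ix_EventLog_EventDate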

If you have a disk free space problem then you should be worried; if not, I would agree with Jezemine.
Satya SKJ
Microsoft SQL Server MVP
Contributing Editor & Forums Moderator
http://www.SQL-Server-Performance.Com
This posting is provided AS IS with no rights for the sake of knowledge sharing.
Thanks for that. I’ve done some small-scale tests, and segmentation does not seem to make any difference for index-based queries on a table of 30k rows vs. 2 tables of 15k rows each.
The inserts appeared to be the same too, but I’m not convinced the server trace in Query Analyzer measures the insert properly, as no write operation was shown?
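The missing write is likely expected behaviour rather than a measurement error: data pages dirtied by an insert are flushed later by the checkpoint/lazy writer, so the Writes column in a trace is often 0 for a small insert. SET STATISTICS IO and SET STATISTICS TIME report a statement's logical reads and CPU time directly – a sketch against table1, with an assumed column list:

set statistics io on
set statistics time on

-- column list is assumed; substitute the table's real columns
insert into table1 (indexfield, datafield) values (1500, 'test row')

set statistics time off
set statistics io off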
Have you compared the execution plans before and after this change?
Satya SKJ
Microsoft SQL Server MVP
Contributing Editor & Forums Moderator
http://www.SQL-Server-Performance.Com
This posting is provided AS IS with no rights for the sake of knowledge sharing.
quote: Originally posted by satya
Have you compared the execution plans before and after this change?

The test was very simple – I had a table of 30k rows and copied half the data into each of 2 new tables, splitting on an indexed field, and then queried a range that would return rows from both new tables.
So the test ended up as:
select * from table1 where indexfield >= 1500 and indexfield <= 2500
versus
select * from table1a where indexfield >= 1500 and indexfield <= 2500
union all
select * from table1b where indexfield >= 1500 and indexfield <= 2500
The second query’s execution plan has an additional table scan and a concatenation, but ends up with a server-side duration similar to the first. So the only extra cost would be the concatenation, as the 2 half-sized table scans are equivalent to 1 full-sized scan?
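If the split tables carry check constraints declaring their ranges, the extra scan can disappear entirely: a union all view over them lets the optimizer eliminate any segment whose constraint cannot match the predicate. A sketch using the tables above, assuming the data was split at indexfield = 15000:

-- declare which range each segment holds (split point and lower bound assumed)
alter table table1a add constraint ck_table1a_range
    check (indexfield >= 0 and indexfield < 15000)
alter table table1b add constraint ck_table1b_range
    check (indexfield >= 15000 and indexfield < 30000)
go

-- a partitioned view over the two segments
-- (in practice, list the columns explicitly instead of select *)
create view table1_all as
    select * from table1a
    union all
    select * from table1b
go

-- for this range only table1a can qualify, so table1b should not appear
-- in the plan and is not scanned
select * from table1_all where indexfield >= 1500 and indexfield <= 2500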

I don’t see a major issue here; also collect PERFMON (SYSMON) counters for system resource usage.
Satya SKJ
Microsoft SQL Server MVP
Contributing Editor & Forums Moderator
http://www.SQL-Server-Performance.Com
This posting is provided AS IS with no rights for the sake of knowledge sharing.