SQL Server Performance

Partitioning HUGE tables, no downtime.

Discussion in 'SQL Server 2008 General DBA Questions' started by Righteousman, Aug 19, 2009.

  1. Righteousman New Member

    Hi Guys,
    Here's my current situation. We have some tables that have over 1 billion rows and they've never been partitioned. We need to partition them by quarter (or maybe even month), but the problem is that these tables are constantly being accessed with records being inserted or deleted on a regular basis by our applications.
    Initially I thought I could just create a new table with a structure identical to the existing table, copy the data in, and do a quick rename. The problem here is that by the time I've copied all of the data over to the new table, tons of records in the original table have been deleted or inserted, so the tables are out of sync. This synchronization process takes a long time (over 8 hours), so it seems like I can never catch up.
    Has anyone had experience partitioning huge tables without having any downtime? Any tips would be greatly appreciated.
    Thanks
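    For reference, the copy-and-rename approach described above might be sketched roughly like this in T-SQL. All object names here (dbo.BigTable, dbo.BigTable_Staging, pfQuarter, psQuarter, and the columns) are hypothetical placeholders, and the boundary dates are just samples:

    ```sql
    -- 1. Partition function/scheme with quarterly boundaries (sample dates).
    CREATE PARTITION FUNCTION pfQuarter (datetime)
    AS RANGE RIGHT FOR VALUES ('2009-01-01', '2009-04-01', '2009-07-01');

    CREATE PARTITION SCHEME psQuarter
    AS PARTITION pfQuarter ALL TO ([PRIMARY]);

    -- 2. New table with the same structure, built on the partition scheme.
    CREATE TABLE dbo.BigTable_Staging
    (
        Id        bigint   NOT NULL,
        EventDate datetime NOT NULL,
        CONSTRAINT PK_BigTable_Staging
            PRIMARY KEY CLUSTERED (Id, EventDate)
    ) ON psQuarter (EventDate);

    -- 3. Copy the data across (TABLOCK helps minimal logging where eligible).
    INSERT INTO dbo.BigTable_Staging WITH (TABLOCK) (Id, EventDate)
    SELECT Id, EventDate
    FROM dbo.BigTable;

    -- 4. Quick cutover by renaming.
    BEGIN TRAN;
    EXEC sp_rename 'dbo.BigTable', 'BigTable_Old';
    EXEC sp_rename 'dbo.BigTable_Staging', 'BigTable';
    COMMIT;
    ```

    The gap between steps 3 and 4 is exactly where the out-of-sync problem described above arises on a table that is being written to continuously.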
  2. MohammedU New Member

    Why don't you create a trigger on the source table to write into the destination (partitioned) table while you are populating the data, so that there will not be much to sync when you do the cutover (rename)?
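    A minimal sketch of that trigger idea, assuming the same hypothetical table and column names as above (dbo.BigTable as the source, dbo.BigTable_Staging as the partitioned copy, Id as the key):

    ```sql
    CREATE TRIGGER trBigTable_Sync
    ON dbo.BigTable
    AFTER INSERT, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;

        -- Replay inserts into the partitioned copy.
        INSERT INTO dbo.BigTable_Staging (Id, EventDate)
        SELECT Id, EventDate
        FROM inserted;

        -- Replay deletes against the partitioned copy.
        DELETE s
        FROM dbo.BigTable_Staging AS s
        JOIN deleted AS d
            ON d.Id = s.Id;
    END;
    ```

    The trigger would need to be created before the backfill starts so no change falls in between, and dropped after the rename; if the application also updates rows, an AFTER UPDATE branch would be needed as well.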
  3. Righteousman New Member

    That's a really good idea! I'll give that a try, thank you!
  4. SQLWorld New Member

    Adding a trigger looks like a good idea, but don't you think it will degrade performance? My situation was different: I partitioned the table during off-peak hours by adding the partitioning column to an existing clustered index and recreating the clustered index. There were 7 million records and it was successfully partitioned.
    You can try adding the trigger and let us know the outcome [Y]
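    The index-rebuild approach mentioned above might look something like this, again with hypothetical names (and reusing the sample psQuarter scheme from earlier). Recreating the clustered index with DROP_EXISTING on a partition scheme rebuilds the table's data onto the partitions in one operation:

    ```sql
    -- Rebuild the existing clustered index onto the partition scheme,
    -- adding the partitioning column (EventDate) to the key.
    -- (Enterprise Edition may permit ONLINE = ON; check edition support.)
    CREATE CLUSTERED INDEX CIX_BigTable
    ON dbo.BigTable (EventDate, Id)
    WITH (DROP_EXISTING = ON, SORT_IN_TEMPDB = ON)
    ON psQuarter (EventDate);
    ```

    Note this still takes a schema-modification lock and heavy I/O for the duration of the rebuild, which is why it was run in an off-peak window; on a billion-row table that window could be long.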
