SQL Server Performance

changing identity data type from int to bigint

Discussion in 'General DBA Questions' started by ramasubbup, Jun 14, 2007.

  1. ramasubbup New Member

    Friends,

    Our SQL Server database holds around 1.1 billion rows occupying 450 GB of storage. It runs on a single server without partitioning.

    Every month approximately 30–35 million records are loaded into the database.

    A single table called 'Claim' was split into 11 different claim tables some time ago to improve performance. The single table was replaced by a UNION ALL partitioned view, also called 'Claim'. The tables are partitioned on a column named 'GroupCode'. Record counts in these tables range from 30 million to 200 million.
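    (For reference, a partitioned view of this kind is just a UNION ALL over the member tables, roughly as sketched below; the member table names and group codes shown here are simplified examples, not the real schema.)

    -- simplified sketch of the 'Claim' UNION ALL partitioned view
    create view dbo.Claim
    as
    select * from dbo.Claim_01   -- GroupCode = 1
    union all
    select * from dbo.Claim_02   -- GroupCode = 2
    union all
    select * from dbo.Claim_11   -- GroupCode = 11 (remaining member tables omitted)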
    Issue:

    The primary key of the 11 claim tables [named 'ClaimId'] is an IDENTITY column. Its data type is INT. We have now reached the point where the IDENTITY column has hit the maximum INT value.

    1. What is the best approach for changing from INT to BIGINT with minimum downtime?
    2. If I make the change one table at a time, will my view still work?

    Thanks for your help

  2. MohammedU New Member

    I don't think there will be any issue changing the data type from int to bigint...
    There will not be any downtime when you do it on a standalone table...
    But I have never tried it with a partitioned table...
    You can test it on your dev server, but I think it should work...
    When you increase the size of the column there will not be any downtime and it is instantaneous...


    MohammedU.
    Moderator
    SQL-Server-Performance.com

    All postings are provided “AS IS” with no warranties for accuracy.
  3. ramasubbup New Member

    Thanks Mohammed.
    The only thing I am worried about is that the volume is huge: 300 million rows.
    When we change the data type from int to bigint, SQL Server internally
    transfers the data to a temp table, recreates the table, and loads the data back
    (correct me if I am wrong here).
    During this process we will not be able to access the table,
    so I really don't know how much time it will take.

    The dev environment doesn't have enough space to test with this volume of data.

    Thanks.
  4. MohammedU New Member

    Try using the ALTER TABLE...ALTER COLUMN command in Query Analyzer (QA) instead of Enterprise Manager (EM)...
    When you do it through EM, it creates a temp table... but not through QA...

    -- quick demo: widen an identity column in place with ALTER COLUMN
    create table #t (id int identity)
    alter table #t alter column id bigint
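
    One thing to check before relying on this: since ClaimId is also the primary key of the claim tables, the ALTER will most likely be blocked by the PK constraint (and by any indexes or foreign keys on the column, which would also need attention), so the constraint may have to be dropped and recreated around the change. A rough sketch with illustrative table and constraint names:

    -- illustrative only: names are examples, not the real schema
    alter table dbo.Claim_01 drop constraint PK_Claim_01
    alter table dbo.Claim_01 alter column ClaimId bigint not null
    -- recreate the key as it was defined before
    -- (include GroupCode here if the partitioned view must stay updatable)
    alter table dbo.Claim_01 add constraint PK_Claim_01 primary key (ClaimId)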


    MohammedU.
    Moderator
    SQL-Server-Performance.com

    All postings are provided “AS IS” with no warranties for accuracy.
  5. Adriaan New Member

    If you're worried about reaching the maximum value, why not take the big step and change to DECIMAL(38,0)? That will take much bigger values than BIGINT.
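
    If that route is considered, the IDENTITY property does work with DECIMAL(38,0), since it only requires an exact numeric type with scale 0; a quick illustration (the temporary table is just for demonstration):

    -- identity on a decimal(38,0) column
    create table #t2 (id decimal(38,0) identity)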
  6. poongodi New Member

    I am posting this on behalf of 'ramasubbup'.

    I noticed that, out of the 11 tables, if I change the datatype in one table and leave the rest as is, the 'View' becomes non-updatable!
    Executing the conversion on all the tables might require a longer database downtime, which is another challenge.

    If we agree to convert all the tables to 'BIGINT', I noticed another problem. I converted all 11 tables to BIGINT [in a sample database with less data] and then recompiled the VIEW. I tried executing an update over the view, but the VIEW does not work! It throws a 'partitioning column not found' error. So is there any guarantee that the VIEW will still work if we modify the datatype in the underlying tables?
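
    For an updatable UNION ALL partitioned view, each member table needs an enabled, trusted CHECK constraint on the partitioning column, and that column must be part of the primary key. If the primary keys were dropped and recreated during the conversion, or the tables were rebuilt through Enterprise Manager (which may recreate constraints WITH NOCHECK and leave them untrusted), that alone can trigger this kind of error, so it is worth verifying those conditions still hold. A rough sketch of one member table that satisfies them (table name, group code value, and constraint names are examples only):

    -- illustrative member-table definition for an updatable partitioned view
    create table dbo.Claim_01
    (
        ClaimId   bigint identity not null,
        GroupCode int not null,
        -- ... other claim columns ...
        constraint PK_Claim_01 primary key (ClaimId, GroupCode),       -- partitioning column in the key
        constraint CK_Claim_01_GroupCode check (GroupCode = 1)         -- trusted CHECK on the partitioning column
    )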


    Thanks!
