Migrating SQL Server Databases – The DBA's Checklist (Part 1)

It is a fact of life: SQL Server databases change homes. They move from one instance to another, from one version to the next, from old servers to new ones. They move around as an organisation’s data grows, applications are enhanced or new versions of the database software are released. If nothing else, servers become old and unreliable and databases eventually need to find a new home. Consider the following scenarios:

1. A new database application is rolled out in a production server from the development or test environment

2. A copy of the production database needs to be installed in a test server for troubleshooting purposes

3. A copy of the development database is regularly refreshed in a test server during the system development life cycle

4. A SQL Server is upgraded to a newer version. This can be an in-place upgrade or a side-by-side migration

5. One or more databases need to be moved between different instances as part of a consolidation strategy. The instances can be running the same or different versions of SQL Server

6. A database has to be restored from a backup file provided by a third party application vendor

7. A backup of the database is restored in the same or a different instance for disaster recovery

8. A database needs to be migrated within the same instance:

a. Files are moved from direct attached storage to a storage area network

b. The same database is copied under a different name for another application

Migrating SQL Server database applications is a complex topic in itself. There are a number of components that can be involved: jobs, DTS or SSIS packages, logins or linked servers are only a few pieces of the puzzle. However, in this article we will focus only on the central part of migration: the installation of the database itself.

Unless it is an in-place upgrade, the database is typically taken from a source server and installed in a destination instance. Most of the time, a full backup file is used for the rollout: either the backup file is provided to the DBA, or the DBA takes the backup and restores it on the target server. Sometimes the database is detached from the source, and the files are copied to the destination and attached there.
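
As an illustration, a minimal backup-and-restore rollout might look like the sketch below. The database name, backup path and logical file names are placeholders; check the actual logical names with RESTORE FILELISTONLY before deciding whether the files need to be relocated.

-- On the source instance: take a full backup (path is a placeholder)
BACKUP DATABASE database_name
TO DISK = N'D:\Backups\database_name.bak'
WITH INIT, STATS = 10;

-- On the destination instance: list the logical file names in the backup
RESTORE FILELISTONLY
FROM DISK = N'D:\Backups\database_name.bak';

-- Restore, relocating the files if the drive layout differs from the source
RESTORE DATABASE database_name
FROM DISK = N'D:\Backups\database_name.bak'
WITH MOVE N'database_name_Data' TO N'E:\SQLData\database_name.mdf',
     MOVE N'database_name_Log'  TO N'F:\SQLLogs\database_name_log.ldf',
     STATS = 10;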

Regardless of the method of copying, moving, refreshing, restoring or upgrading the physical database, there are a number of steps the DBA should follow before and after it has been installed in the destination. It is these post-installation steps we are going to discuss below. Some of them apply in almost every scenario described above, while others will depend on the type of objects contained within the database. The principles also hold regardless of the number of databases involved.

Step 1: Make a copy of data and log files when detaching and attaching

When detaching and attaching databases, ensure you have made copies of the data and log files if the destination is running a newer version of SQL Server. This is because once attached to a newer version, the database cannot be detached and attached back to an older version. Trying to do so will give you a message like the following:

Server: Msg 602, Level 21, State 50, Line 1

Could not find row in sysindexes for database ID 6, object ID 1, index ID 1. Run DBCC CHECKTABLE on sysindexes.

Connection Broken

If you try to back up the attached database and restore it on the source, it will still fail.

Similarly, if you restore the database to a newer version, it cannot be backed up or detached and put back into an older version of SQL Server. Unlike the detach and attach method, though, you do not lose the backup file or the original database in this case.

When detaching and attaching a database, it is important that you keep all the log files available along with the data files. It is possible to attach a database without a log file and instruct SQL Server to create a new one; however, this does not work if the database was detached while the primary filegroup was read-only. You will need all the log files in such cases.
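
A sketch of the detach, copy and attach sequence is shown below. The database name and file paths are placeholders, and the copies you keep on the source server are your only way back to the older version.

-- On the source instance: detach the database
EXEC sp_detach_db @dbname = N'database_name';

-- Copy (do not just move) the data and log files to the destination server,
-- keeping the originals in case the database has to go back to the old version

-- On the destination instance: attach using all the data and log files
CREATE DATABASE database_name
ON (FILENAME = N'E:\SQLData\database_name.mdf'),
   (FILENAME = N'F:\SQLLogs\database_name_log.ldf')
FOR ATTACH;

-- FOR ATTACH_REBUILD_LOG can build a new log file when the old one is missing,
-- but, as noted above, not if the database was detached while the primary
-- filegroup was read-only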

Step 2: Change database compatibility level

Once the database has been restored or attached to a newer version of SQL Server, change the database compatibility level to reflect the newer version unless there is a compelling reason not to do so. When attaching or restoring from a previous version of SQL Server, the database retains the older version’s compatibility level. The only time you would want to keep a database at an older compatibility level is when the code within the database is no longer supported by SQL Server. For example, outer joins with the *= or =* operators were still possible in SQL Server 2000 (with a warning message), but are no longer allowed in SQL Server 2005. If your stored procedures or triggers use this form of join, you would want to keep the database at the older compatibility level. For a list of compatibility issues between older and newer versions of SQL Server databases, refer to Books Online under the sp_dbcmptlevel topic.

Application developers and architects can help you decide whether or not to change the compatibility level. You can always change the compatibility level back to an older value later if necessary. To change it, use either the database’s properties in SQL Server Management Studio or the sp_dbcmptlevel stored procedure, as shown below.
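
For example, the following is a minimal sketch; the database name is a placeholder, and the levels shown are 90 (SQL Server 2005) and 100 (SQL Server 2008).

-- Check the current compatibility level (SQL Server 2005 and later)
SELECT name, compatibility_level
FROM sys.databases
WHERE name = N'database_name';

-- Change the level with the system stored procedure
EXEC sp_dbcmptlevel @dbname = N'database_name', @new_cmptlevel = 90;

-- On SQL Server 2008, ALTER DATABASE can be used instead
ALTER DATABASE database_name SET COMPATIBILITY_LEVEL = 100;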

 

Bear in mind that you cannot run the built-in reports for databases from SQL Server Management Studio if you keep the database at an older compatibility level. The following figure shows the error message I received when trying to run the “Disk Usage by Top Tables” report against a database. This database was hosted on a SQL Server 2005 instance and still had a compatibility level of 80 (SQL Server 2000).

[Figure: error message from the “Disk Usage by Top Tables” report against a database at compatibility level 80]

Step 3: Run DBCC UPDATEUSAGE

Run the DBCC UPDATEUSAGE command against the migrated database when upgrading to a newer version of SQL Server. This is particularly important if you are migrating from a pre-SQL Server 2005 version. Up until SQL Server 2005, the table or index row counts and the page counts for data, leaf, and reserved pages could become out of sync over time in the database system tables. The DBCC UPDATEUSAGE command corrects these inaccuracies and ensures the counts are updated.

To run the DBCC UPDATEUSAGE command against your target database, execute the following:

DBCC UPDATEUSAGE('database_name') WITH COUNT_ROWS

If no inaccuracies are found, the command does not return any data. If inaccuracies are found and DBCC UPDATEUSAGE fixes them, a detailed report is shown in the output.

You should run the DBCC UPDATEUSAGE command before you run DBCC CHECKDB (discussed next). From SQL Server 2005, the DBCC CHECKDB command has been modified to trap errors related to incorrect row or page counts. However, if the counts are wrong in the source database and you run DBCC CHECKDB against your newly restored or attached database, you will get a warning message like the following:

Msg 2508, Level 16, State 3, Line 1

The In-row data RSVD page count for object "table_name", index ID 0, partition ID 58037252456448, alloc unit ID 58037252456448 (type In-row data) is incorrect. Run DBCC UPDATEUSAGE.

Step 4: Run DBCC CHECKDB

DBCC CHECKDB is a commonly used command that checks the allocation, structural and logical integrity of a database and its objects. Although it can also fix minor issues with its built-in REPAIR options, those options should not be used the first time you run it against your migrated database, because at that point you want to know whether the database you just installed is structurally okay. You would execute a command like the following:

DBCC CHECKDB('database_name') WITH ALL_ERRORMSGS

Starting with SQL Server 2005, DBCC CHECKDB has another option: DATA_PURITY. This option detects column values that are invalid or out of range; an example of such an anomaly would be a datetime column holding values outside the acceptable range. For databases upgraded from pre-SQL Server 2005 versions, DBCC CHECKDB does not automatically check column values until the command has been run with the DATA_PURITY option and has completed error free. After that, you do not need to specify the option, as DBCC CHECKDB will check column-value integrity automatically.
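
For a database upgraded from SQL Server 2000 or earlier, that first run might look like the following (the database name is, again, a placeholder):

-- Run once without errors after the upgrade; later DBCC CHECKDB runs will then
-- include the column-value checks automatically
DBCC CHECKDB('database_name') WITH DATA_PURITY, ALL_ERRORMSGS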

If everything is okay, the command will run successfully and you will see something like the following at the end of the output:

CHECKDB found 0 allocation errors and 0 consistency errors in database 'database_name'.

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

Step 5: Update statistics

The AUTO_CREATE_STATISTICS option ensures SQL Server automatically generates any missing statistics during the query optimisation phase. As statistics can become stale and out of date over time, setting the AUTO_UPDATE_STATISTICS option to ON ensures the query optimiser automatically updates any out-of-date statistics during optimisation.

I would usually keep the “Auto Update Statistics Asynchronously” option at its default value of “false”. This option, introduced in SQL Server 2005 and also present in SQL Server 2008, controls how SQL Server behaves when it finds out-of-date statistics during the optimisation phase. Setting it to “true” lets the query run with the old statistics and then updates the statistics outside the query, so subsequent runs of the query use the new statistics. When the option is set to “false”, the query optimiser waits until the new statistics have been generated.
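
One way to apply this step after a migration is sketched below. The database name is a placeholder, and sp_updatestats is only one option for refreshing the existing statistics; UPDATE STATISTICS with FULLSCAN on selected tables is another.

-- Make sure the automatic statistics options are on for the migrated database
ALTER DATABASE database_name SET AUTO_CREATE_STATISTICS ON;
ALTER DATABASE database_name SET AUTO_UPDATE_STATISTICS ON;

-- Keep asynchronous statistics updates at their default (off)
ALTER DATABASE database_name SET AUTO_UPDATE_STATISTICS_ASYNC OFF;

-- Refresh all existing statistics in the database after the migration
USE database_name;
EXEC sp_updatestats;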

_________________________
Part 2 of the series will examine setting database options, mapping database users to logins, recovery models, backups, and changing the code associated with configurations.
