Migrating SQL Server Databases – The DBA's Checklist (Part 3)

Continuing from Part 2 of the Database Migration Checklist series:

Step 10: Full-text catalogs and full-text indexing

This is one area of SQL Server where people do not seem to take notice unless something goes wrong. Full-text functionality is a specialised area in database application development and is not usually implemented in your everyday OLTP systems.

Nevertheless, if you are migrating a database that uses full-text indexing on one or more tables, you need to be aware of a few points.

First of all, SQL Server 2005 now allows full-text catalog files to be restored or attached along with the rest of the database. However, if after migration you are unable to view the properties of a full-text catalog, you are probably better off dropping and recreating it. You may also get the following error messages along the way:

Msg 9954, Level 16, State 2, Line 1

The Full-Text Service (msftesql) is disabled. The system administrator must enable this service.

This basically means the full-text service (msftesql) is disabled or stopped on the destination instance. You will need to start it from SQL Server Configuration Manager.
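Before going further, it is also worth confirming that full-text search is actually installed on the destination instance and checking the state of the migrated catalogs. The following is a minimal sketch using standard metadata functions; it only reports what is there rather than changing anything:

-- Returns 1 if the full-text components are installed on this instance
SELECT SERVERPROPERTY('IsFullTextInstalled') AS IsFullTextInstalled

-- List the full-text catalogs in the current database and their population status
-- (0 = idle; other values indicate a population is in progress, paused or throttled)
SELECT name,
       FULLTEXTCATALOGPROPERTY(name, 'PopulateStatus') AS PopulateStatus
FROM   sys.fulltext_catalogs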

Similarly, if you get the following message, you will also need to drop and recreate the catalog and populate it.

Msg 7624, Level 16, State 1, Line 1

Full-text catalog 'catalog_name' is in an unusable state. Drop and re-create this full-text catalog.

A full population of full-text indexes can be a time- and resource-intensive operation, so you will obviously want to schedule it for low-usage hours if the database is restored on an existing production server.

Also, bear in mind that any scheduled job that existed on the source server for populating the full-text catalog (e.g. a nightly incremental update) will need to be re-created on the destination.
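If you do have to drop and rebuild a catalog, the commands involved take roughly the following shape. This is only a sketch: the table, column, unique key index and catalog names are placeholders that need to be replaced with your own objects, and the final population can be deferred to a quiet period as discussed above:

-- Remove the existing full-text index and catalog
DROP FULLTEXT INDEX ON Production.Document
DROP FULLTEXT CATALOG ftCatalog
GO

-- Recreate the catalog and the index without populating it immediately
CREATE FULLTEXT CATALOG ftCatalog AS DEFAULT
GO
CREATE FULLTEXT INDEX ON Production.Document (DocumentSummary)
    KEY INDEX PK_Document_DocumentID ON ftCatalog
    WITH CHANGE_TRACKING OFF, NO POPULATION
GO

-- Kick off the full population during a low-usage window
ALTER FULLTEXT INDEX ON Production.Document START FULL POPULATION
GO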

Step 11: Database collation considerations

Another sticky area to consider during a migration is the collation setting. Ideally you would want to restore or attach the database to a SQL Server instance with the same collation. Although it is not commonly done, SQL Server does allow you to change a database’s collation with the ALTER DATABASE command:

ALTER DATABASE database_name COLLATE collation_name

You should not use this command lightly, as it can have far-reaching consequences. When you change the database collation, the collation of existing user table columns does not change. However, the columns of every new table, every new UDT and any subsequently created variables or parameters in code will use the new setting. The collation of every char, nchar, varchar, nvarchar, text or ntext column of the system tables will also be changed. Stored procedure and function parameters will be changed to the new collation and, finally, every character-based system data type and user-defined data type will also be affected.

The change may also fail if dependent objects are involved. You may get one or more messages like the following:

Cannot ALTER 'object_name' because it is being referenced by object 'dependent_object_name'.

That is why it is important to test and check for collation-related issues. Collation also affects queries that compare character-based data: if errors arise because the two sides of a comparison use different collations, the COLLATE keyword can be used to cast one side to the same collation as the other.
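For instance, a quick comparison of the instance and database collations, followed by an explicit COLLATE cast in a join, could look like the following. The database, table and column names here are purely illustrative:

-- Compare the destination instance collation with the migrated database's collation
SELECT SERVERPROPERTY('Collation') AS instance_collation,
       DATABASEPROPERTYEX('AdventureWorks', 'Collation') AS database_collation

-- Cast one side of a comparison when the two columns use different collations
SELECT c.CustomerCode
FROM   dbo.Customers c
JOIN   staging.CustomerImport s
       ON c.CustomerCode = s.CustomerCode COLLATE SQL_Latin1_General_CP1_CI_AS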

Step 12: Service Broker considerations

Service Broker is a feature introduced in SQL Server 2005 that allows database applications to take advantage of asynchronous messaging between senders and recipients. When restoring or attaching a broker-enabled database, be mindful of two things: the broker identifier and the message delivery status.

Each SQL Server 2005 database has a unique “broker identifier” that sets it apart from any other database in the network. The broker identifier is a GUID and is assigned by SQL Server every time a new database is created. You can have a look at the broker identifier of the databases in an instance by executing the following command:

SELECT name, service_broker_guid FROM sys.databases

SQL Server assigns this identifier so that all messages for a Service Broker conversation are delivered to the correct database. If you think about it, this is necessary: if you want to talk to a person in a meeting, you address them by name, and if more than one person around the table has the same name, you use some other cue, perhaps looking at the person or a polite gesture of the hand, to make clear who you mean. Likewise, SQL Server needs to ensure that messages know exactly which database they are destined for, and this unique identifier is how it does so.

Apart from being assigned a broker identifier, the database can also be marked as “broker enabled”. When broker-enabled, Service Broker ensures that messages are delivered from the database and that incoming messages can reach the services defined within it.

If you want to see which databases are broker enabled in the current instance, you can execute a query like the following:

SELECT name, is_broker_enabled FROM sys.databases

However, what happens when you restore or attach a database to an instance that already has another database with the same service broker GUID? This is a perfectly valid situation: you may restore the same database multiple times, in the same or a different instance, under different names. How does SQL Server know which database incoming messages should be routed to?

The answer is that SQL Server will disable message delivery (i.e. disable the broker) in such cases. When you restore or attach a database, the service broker identifier remains intact. The message delivery status is also preserved if no other database in the instance has the same identifier. However, if you restore or attach the same database again, the message delivery option will be turned off in the newly attached or restored database. When message delivery is turned off, no messages will go out from the database and Service Broker will not recognise the services defined within it as eligible for receiving messages. This is a bit like being suspended from a game indefinitely.

As an example, I restored three copies of the AdventureWorks database in an instance. I then executed the following command:

SELECT name, is_broker_enabled, service_broker_guid
FROM  sys.databases
WHERE  name LIKE 'Ad%'

The result looks like the following:

As you can see, all three databases have the same service broker GUID, but only one database has the broker enabled.

You can enable a database’s message delivery status by using the following command:

ALTER DATABASE database_name SET ENABLE_BROKER

However, when you try to enable service broker for a database in an instance where another database already exists with the same service broker GUID and is broker-enabled, you will get a message like the following:

Msg 9772, Level 16, State 1, Line 1

The Service Broker in database “database_name” cannot be enabled because there is already an enabled Service Broker with the same ID.

Msg 5069, Level 16, State 1, Line 1

ALTER DATABASE statement failed.

This is why you need to be careful when restoring or attaching a Service Broker database application, so that only the correct database is made broker-enabled.
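If the restored copy is only needed for testing or reporting, and it is the original database that must keep its conversations, one option worth knowing about is ALTER DATABASE … SET NEW_BROKER. This assigns a brand new broker identifier to the copy (ending any conversations stored inside it) so the original database can stay broker-enabled. A sketch, using a hypothetical restored copy named AdventureWorks_Copy:

-- Give the restored copy its own broker identifier and enable its broker,
-- leaving the original database's conversations untouched
ALTER DATABASE AdventureWorks_Copy SET NEW_BROKER WITH ROLLBACK IMMEDIATE
GO

-- If instead this copy is the one that must carry on the existing conversations,
-- disable the broker on the duplicate first and then enable it here:
-- ALTER DATABASE AdventureWorks_Copy SET ENABLE_BROKER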

Step 13: Encryption: enable the database master key

SQL Server data can now be encrypted. SQL Server 2005 first introduced the capability to encrypt individual columns within tables. SQL Server 2008 enhanced that feature to the database level. With Transparent Data Encryption (TDE), the entire database can now be encrypted.

We will not discuss how to migrate SQL Server 2008 encrypted databases, but touch upon what needs to be done if you have a SQL Server 2005 database at hand with encrypted columns in it.

Encryption works in a hierarchical manner in SQL Server. Each level of hierarchy represents a method of encrypting the level below it. The mechanism of encryption is implemented through a combination of keys and certificates. To decrypt a lower level, you will need to decrypt the higher level first.

At the very top of the encryption tree is the Service Master Key, which is generated by the instance. It is used to encrypt the Database Master Key of each database where encryption is to take place. You create a database master key in every database whose columns are to be encrypted. A copy of the database master key is saved inside the database and also in the master database.

Database master keys can be used to encrypt Certificates or Asymmetric Keys within the database. Certificates and asymmetric keys use public/private key pairs for encryption and decryption. Certificates can be used to encrypt both data and symmetric keys, and asymmetric keys can likewise encrypt and decrypt symmetric keys and data.

At the bottom of the hierarchy are Symmetric Keys. Symmetric keys can be used to encrypt the data in a table column.

So in essence, a table’s column can be encrypted using a symmetric key, which in turn can be encrypted either by an asymmetric key or a certificate, which can in turn be encrypted by the database master key. The database master key itself is encrypted by the service master key.
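To make that chain concrete, the following sketch encrypts a column with a symmetric key that is protected by a certificate, which is itself protected by the database master key. The key, certificate, table and column names are purely illustrative, and the certificate and master key are assumed to exist already:

-- Symmetric key protected by a certificate (the certificate is protected
-- by the database master key)
CREATE SYMMETRIC KEY DataSymmetricKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE DataCertificate
GO

-- The symmetric key must be opened before it can encrypt or decrypt
OPEN SYMMETRIC KEY DataSymmetricKey DECRYPTION BY CERTIFICATE DataCertificate

UPDATE dbo.Accounts
SET    AccountNumberEncrypted = EncryptByKey(Key_GUID('DataSymmetricKey'), AccountNumber)

CLOSE SYMMETRIC KEY DataSymmetricKey
GO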

However, you do not have to use the whole hierarchy for encryption.  For example, a certificate alone can be used to encrypt the column’s data.

When restoring or attaching a database with encrypted columns to a different instance, the main thing to be aware of is that you will need to OPEN the database master key before decryption can happen. Restoring to the same instance is not a problem.

In the following example, I encrypted the CardNumber column of the Sales.CreditCard table in the AdventureWorks database. I had created a certificate for the encryption purpose and encrypted the certificate with the database master key.

USE AdventureWorks
GO

ALTER TABLE Sales.CreditCard ADD CardNumberEncrypted varbinary(500) NULL
GO

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'TestMasterKey'
GO

CREATE CERTIFICATE TestCertificate WITH SUBJECT = 'Test Certificate', START_DATE = '08/26/2009'
GO

UPDATE Sales.CreditCard SET CardNumberEncrypted = EncryptByCert(Cert_ID('TestCertificate'), CardNumber)
GO

SELECT TOP 100 CardNumber, CONVERT(nvarchar(25), DecryptByCert(Cert_ID('TestCertificate'), CardNumberEncrypted)) AS DecryptedCardNumber
FROM  Sales.CreditCard
GO

The result of the last SELECT query is shown below:

Next, I restored the database in a separate SQL Server instance. When I tried to run the same SELECT query in the newly restored database, the decryption did not work: the decrypted value was returned as NULL.

To enable decryption, I first had to OPEN the database master key using the same password. The OPEN command only lasts for the current session: decryption stops working in other sessions or once the current session closes. To make the change permanent, I then had to ALTER the database master key to associate it with the current instance’s service master key:

OPEN MASTER KEY DECRYPTION BY PASSWORD = 'TestMasterKey'
GO

ALTER MASTER KEY ADD ENCRYPTION BY SERVICE MASTER KEY
GO

And then the decryption works.

So this is something to keep in mind when you are migrating databases with encrypted columns. You may want to back up the database master key from the source instance before migration.
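A minimal sketch of backing up the key on the source instance and restoring it on the destination is shown below. The file path and passwords are placeholders, and the backup file should of course be kept somewhere secure:

-- On the source instance: back up the database master key to a file
USE AdventureWorks
GO
OPEN MASTER KEY DECRYPTION BY PASSWORD = 'TestMasterKey'
BACKUP MASTER KEY TO FILE = 'C:\KeyBackups\AdventureWorks_MasterKey.bak'
    ENCRYPTION BY PASSWORD = 'FileBackupPassword'
GO

-- On the destination instance, in the migrated database:
-- RESTORE MASTER KEY FROM FILE = 'C:\KeyBackups\AdventureWorks_MasterKey.bak'
--     DECRYPTION BY PASSWORD = 'FileBackupPassword'
--     ENCRYPTION BY PASSWORD = 'TestMasterKey'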

Step 14: Consider high-availability requirements

High availability for SQL Server databases can come in a few different forms: database snapshots, log shipping, mirroring and replication being the main ones apart from clustering.

You need to be aware of any existing high availability mechanisms for the source database before migrating it. A restored database does not maintain its original log shipping or mirroring configurations. If the migrated database needs to be log shipped or mirrored to the same secondary site, you will need to disable log shipping or mirroring for the old database first. Once the migration is complete, you will need to re-configure log shipping or mirroring.
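As a small illustration, and assuming the database is named AdventureWorks, removing an existing mirroring session before the migration is a single command run against the principal; mirroring would then be re-established once the migrated database is in place:

-- Run on the principal server to break the existing mirroring session
ALTER DATABASE AdventureWorks SET PARTNER OFF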

The same holds true if your source database is part of a replication topology. When migrated to a new instance, you will need to recreate and reconfigure replication. If you are doing an in-place upgrade to a newer version of SQL Server such as 2005, the setup program will automatically invoke the sp_vupgrade_replication system procedure, which upgrades the schemas and metadata needed by the replication infrastructure.

Unlike the other high-availability methods, migrating from or to a clustered instance does not require any post-migration configuration on the database side. You can take a backup of the database from a clustered instance and restore it in a non-clustered instance, or vice versa.

Conclusion: Some final thoughts

We have tried to list a number of areas that the DBA should be looking at during a database’s migration. To help the process, answers to a number of questions should also be sought:

1.     How many databases are involved in the migration? Obviously migrating ten different databases will entail more effort and planning than migrating only one database.

2.     What is the nature of the database(s) being migrated? This should help you decide the instance where the database should be migrated to. For example, you would not want to install a data mart on a high-traffic OLTP instance. Similarly, you probably will not be bothering about backups and disaster recovery if your migration is part of a regular refresh process in a test environment.

3.     If the database is already in production, what timeframe will be available for the migration? Backup/restore or detach/attach will take some time and you will want to minimise this time as much as possible. Also, you may want to know what services and applications are accessing the source database and how they need to be stopped. For example, if you are migrating a web-based application, you will probably want to talk with your system administrators about shutting down Apache or IIS. Users will need to be made aware of any system unavailability.

4.     Do you need to consider the size and growth characteristics of the database once it has been migrated? If the database has been growing fast and it needs to be migrated because the existing hardware resources are insufficient for the performance required, you will probably be changing the database file size, growth increments etc. once it has been migrated. You will also probably be tuning CPU and memory in the new instance.

5.     Does the destination server have enough hardware resources (disk, CPU, memory, network capacity) to serve the new, migrated database? Will the database be competing for resources with any existing database application? Management may want you to install a new instance for the database on a server where one or more instances are already running. You need to be aware (and make management aware) that additional instances will consume additional hardware resources.

6.     If the destination server is a separate physical system, does it have an optimal disk configuration? If your old database server had separate physical disks for data, log, backup and tempdb files, you probably would not want to migrate it to a server where only two physical drives of C: and D: are present.

7.      What other “peripheral” components will be involved with the migration of the database? As we mentioned at the very beginning, migrating a database application can be a complex task, and you will still have to go through the process of migrating jobs, logins, custom error messages, file shares, Reporting Services solutions, SSIS or DTS packages and so on.

8.     What user accounts will be accessing the migrated database? Will there be any new user accounts? Will there be any fewer accounts? When migrating a development database to a production server, you may need to account for end-users who were not involved during the testing phase and whose user accounts did not exist in the development database.

9.     Will the migrated database have any special backup or disaster recovery requirements? Business requirements may dictate that a database be backed up more frequently than other databases in the same instance.

10.  Is there any provision for post migration testing? Have test cases been developed? You would want to ensure everything has been migrated and all configurations have been checked before switching the applications to the new database.
