Robin Dewson is the author of the Wrox Press book, Beginning SQL Server 2000 Programming, and the co-author of several other SQL Server-related books, including Professional Access 2000 Programming, Professional SQL Server Development with Access 2000, Professional SQL Server DTS, and Professional SQL Server 2000 Programming. He is a SQL Server and Visual Basic consultant for Lehman Brothers in London, England.
In this exclusive interview, Robin shares some of his knowledge and experience on how to get the best performance from SQL Server.
What would you consider would be the top five ways a developer can maximize the performance of a SQL Server-based application? In other words, where should developers be focusing their time and efforts in order to maximize the performance of their SQL Server-based applications?
First of all, I am assuming that this is an OLTP system. Performance is essential in an OLAP or historical system too, but the focal points there are quite different.
In my mind, the number one point has to be indexes. Whether the data is de-normalized or not, without efficient and well-planned indexes, it could take a very long time to compute or return data. Looking at areas in each table where a system will be retrieving data uniquely, or from a certain point onwards, or backwards, is always a good place to start.
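To sketch the point, suppose a hypothetical Orders table (the table and column names here are illustrative, not from the interview) where rows are fetched uniquely by OrderID and also retrieved by customer from a certain date onwards:

```sql
-- Hypothetical Orders table: rows are fetched uniquely by OrderID,
-- and often retrieved per customer from a given date onwards.
CREATE TABLE Orders (
    OrderID    INT IDENTITY(1,1) NOT NULL,
    CustomerID INT NOT NULL,
    OrderDate  DATETIME NOT NULL,
    Amount     MONEY NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
)

-- A nonclustered index supporting "from this date onwards" retrieval
-- for a given customer, so such queries need not scan the table.
CREATE NONCLUSTERED INDEX IX_Orders_Customer_Date
    ON Orders (CustomerID, OrderDate)
```

The clustered primary key covers the unique lookups; the composite nonclustered index covers the range-style access pattern.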
You would then move on to ensuring that each table contains the correct information. I am not a strong advocate of full normalization; you have to normalize your data to the appropriate degree. Don't move data out to another table just to satisfy normalization rules when, from the point of view of speed, it would be much better kept within the original table itself. For example, don't hold a table just of US states with an identity column which is then used to link to an identity column within the Customer table.
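A minimal sketch of that US-states example (table and column names are assumptions for illustration): rather than a separate lookup table joined on an identity column, the two-character state code can live directly on the customer row.

```sql
-- Deliberate, controlled denormalization: the state code is stored
-- inline, so reading a customer's state requires no join at all.
CREATE TABLE Customers (
    CustomerID INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    Name       VARCHAR(100) NOT NULL,
    State      CHAR(2) NOT NULL   -- e.g. 'NY', 'CA'
)
```

The trade-off is a couple of bytes per row against a join on every read, which for a small, stable domain like state codes usually favours keeping the value inline.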
Moving on to the third point, I would suggest avoiding bottlenecks on your system. If you have several databases on a single server, and all of the databases are heavily used, then the server will be under severe pressure. It would be advisable to move databases around, or perhaps move the transaction logs onto a dedicated drive. You must always keep checking disk space, memory usage, network congestion, and even indexes, all of which can cause bottlenecks or over-use of a system.
The fourth area, which is perhaps less commonly discussed but nevertheless crucial, is deadlocking. When building stored procedures, it is imperative that transactions are kept as short as possible. Also, try to avoid nesting transactions several levels deep. Every extra millisecond a transaction is in progress is a millisecond another transaction may spend waiting on the same table item.
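A minimal sketch of keeping a transaction short (account and audit table names are hypothetical): do any lookups or validation before opening the transaction, so that only the writes themselves run while locks are held.

```sql
-- Validation and preparatory reads happen BEFORE the transaction,
-- so locks are held only for the two writes below.
IF EXISTS (SELECT 1 FROM Accounts WHERE AccountID = 42)
BEGIN
    BEGIN TRANSACTION
        UPDATE Accounts
        SET    Balance = Balance - 100
        WHERE  AccountID = 42

        INSERT INTO AccountAudit (AccountID, ChangeAmount)
        VALUES (42, -100)
    COMMIT TRANSACTION
END
```

The shorter the span between BEGIN TRANSACTION and COMMIT, the smaller the window in which another transaction can block, or deadlock, against this one.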
Finally, cursors. If at all possible, avoid them. This can be a bit of a contentious issue, but compared to set-based work against a table, they are very slow indeed. When building a stored procedure, try to work with temporary tables rather than a cursor. If you are using a cursor on a small number of records, the degradation won't be so bad, but on large tables it will be major, so try to find a way around them.
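As a sketch of the temporary-table approach (the Orders table and Flagged column here are illustrative assumptions): instead of opening a cursor and updating row by row, stage the rows of interest in a temporary table and apply one set-based UPDATE.

```sql
-- Stage the rows of interest in a temporary table...
SELECT OrderID
INTO   #LargeOrders
FROM   Orders
WHERE  Amount > 1000

-- ...then apply a single set-based UPDATE instead of a
-- row-by-row cursor loop.
UPDATE o
SET    o.Flagged = 1
FROM   Orders o
       INNER JOIN #LargeOrders l ON o.OrderID = l.OrderID

DROP TABLE #LargeOrders
```

One round trip through the table, rather than one fetch-and-update per row, is where the speed difference comes from.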
These points are, of course, seen from a developer's viewpoint. There is another area which all users should ensure is carefully monitored, and that is keeping the statistics for each table up to date. Neglecting this can easily turn a good, well-designed system into one that is very slow to respond.
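Keeping statistics current can be sketched with the standard SQL Server commands (the table and database names here are placeholders):

```sql
-- Refresh the distribution statistics on one table...
UPDATE STATISTICS Orders

-- ...or refresh all out-of-date statistics in the current database.
EXEC sp_updatestats
```

SQL Server can also maintain statistics automatically via the database's auto-update-statistics option, but on a heavily loaded system it is still worth scheduling explicit refreshes of the busiest tables.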
What do you consider would be the top three mistakes developers make that affect the performance of their SQL Server-based applications? And how can they avoid them?
Perhaps one of the biggest mistakes is failing to monitor a live, settled, working system for performance changes. These can come from changes in data access, in other words, how users use the system, perhaps through a change in working practice. They can also come from not updating statistics, which SQL Server needs in order to choose the correct index, or to know when a table scan is actually the better option. Ignoring a system once it has gone live is not the best action you can take from a performance viewpoint.
Secondly, writing inefficient stored procedures: procedures whose T-SQL does not make use of the correct indexes, or that fall back on cursors. Even worse is failing to modularize them, so that you end up with a handful of very large stored procedures rather than small, more manageable units of work.
Finally, not taking into account locking issues when building code that uses transactions. Holding on to a lock even a second or so longer than necessary can have disastrous effects on a system's performance: other transactions will be held up waiting until the original transaction completes, or, worse still, a deadlock occurs. Keep the nesting of transactions to a minimum, and keep transactions as short as possible. In an OLTP system, if you are holding a lock for more than a couple of seconds, review your code and see whether there is a better way to perform the data update.
What part does database design have to play in the performance of a SQL Server-based application?
It plays a major part. When building a new system, or adding to an existing one, it is crucial that the design is correct. Ensuring that the correct data is captured and placed in the appropriate tables, that the right relationships exist between the tables, and that data redundancy is eliminated is an ultimate goal when considering performance.
Planning a design should be an iterative process, constantly reviewed as an application is developed. However much everyone strives to get it right first time, it is rare for the initial design and system goals to survive unaltered, however slightly. A designer therefore has to stay on top of those changes and ensure that the design of the database remains efficient.