Using parallelism to execute a SQL Server query has its costs, because running a query in parallel incurs more overhead than running it serially. But if the benefits of running a query in parallel outweigh those costs, then parallelism is a good thing. In most cases this is true, but not in all cases.
As a rule of thumb, if a query already runs very fast serially, there is no point in even considering parallelism for it, as the extra overhead of building and coordinating a parallel plan could exceed the time it takes to run the query serially.
By default, if the Query Optimizer estimates that a query will take less than 5 seconds to execute serially, SQL Server does not consider parallelism for it. This 5-second threshold can be modified using the “cost threshold for parallelism” SQL Server option, which accepts any value from 0 to 32767. Keep in mind that the threshold is compared against the optimizer’s estimated cost for the serial plan, not against actual elapsed time. So if you set this value to 10, the Query Optimizer won’t consider parallelism for any query that it estimates will take less than 10 seconds to run serially.
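For reference, the option is changed with sp_configure. The sketch below assumes you have sysadmin-level permissions; “cost threshold for parallelism” is an advanced option, so “show advanced options” must be enabled first:

```sql
-- Raise "cost threshold for parallelism" from the default of 5 to 10.
-- This is an advanced option, so "show advanced options" must be on.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 10;
RECONFIGURE;
```

The change takes effect immediately after RECONFIGURE, without a server restart.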
One case where a value smaller than 5 might be useful is when SQL Server is acting as a data warehouse and many very complex queries are being run. A lower value allows the Query Optimizer to consider parallelism more often, which can help in some situations.
One case where increasing this value may help performance is when you have identified that many queries are running in parallel and, in addition, CPU utilization is very high, creating a CPU bottleneck. In this case, you may want to consider increasing the default value of 5 to a larger number, thus reducing the number of queries that run in parallel and, with it, the overall usage of your server’s CPUs, which may help the overall performance of your server.
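On SQL Server 2005, one rough way to gauge how much time is being spent coordinating parallel queries is to check the CXPACKET wait type. This is only a sketch of the idea, not a complete diagnostic:

```sql
-- High CXPACKET wait time relative to other wait types can indicate
-- that many queries are running in parallel (SQL Server 2005).
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'CXPACKET';
```

Some CXPACKET waiting is normal on a server that runs parallel queries; it is the trend and the proportion relative to total waits that matter.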
You will want to test changes to this default value thoroughly before implementing them on your production servers. And in most cases, other than the exceptions listed above, you should not change this setting. [7.0, 2000, 2005] Updated 8-21-2006
*****
By default, SQL Server’s processes run at the same priority as any other application on a server. In other words, no single application process has a higher priority than any other when it comes to receiving CPU cycles.
The “priority boost” configuration option allows you to change this. The default value for this option is “0”, which means that the priority of SQL Server processes is the same as all other application processes. If you change this setting to “1”, then SQL Server now has a higher priority than other application processes. In essence, this means that SQL Server has first priority to CPU cycles over other application processes running on the same server. But does this really boost performance of SQL Server?
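For completeness, this is how the option would be enabled (shown for reference only; as discussed below, it is rarely a good idea). The sketch assumes sysadmin-level permissions; note that this particular option only takes effect after SQL Server is restarted:

```sql
-- Enable "priority boost" (not recommended; for reference only).
-- This is an advanced option, and the change takes effect only
-- after the SQL Server service is restarted.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'priority boost', 1;
RECONFIGURE;
```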
Let’s look at a couple of scenarios. First, let’s assume your server runs not only SQL Server, but other apps (not recommended for best performance, but a real-world possibility), and that there is plenty of CPU power available. If this is the case, and if you give SQL Server a priority boost, what happens? Not much. If there is plenty of CPU power available, a priority boost doesn’t mean much. Sure, SQL Server might gain a few milliseconds here and there as compared to the other applications, but I doubt you would be able to notice the difference.
Now let’s look at a similar scenario as above, but let’s assume that CPU power is virtually all exhausted. If this is the case, and SQL Server is given a priority boost, sure, SQL Server will now get its work done faster, but only at the cost of slowing down the other applications. If this is what you want, OK. But a better solution would be to boost CPU power on the server, or reduce the server’s load.
But what if SQL Server is running on a dedicated server with no other applications and if there is plenty of excess CPU power available? In this case, boosting the priority will not gain a thing, as there is nothing competing (other than the operating system) for CPU cycles, and besides, there are plenty of extra cycles to go around.
And last of all, if SQL Server is on a dedicated server and the CPU is maxed out, giving it a priority boost is at best a zero-sum game: parts of the operating system could be negatively affected, and the gain, if any, will be very little for SQL Server.
As you can see, this option is not worth the effort. In fact, Microsoft has documented several problems related to using this option, which makes this option even less desirable to try. [7.0, 2000, 2005] Updated 8-21-2006
*****
If your server is dedicated to SQL Server 7.0, and the server is running more than one CPU, then consider modifying the “time slice” SQL Server configuration setting. This setting determines the amount of time (in milliseconds) that a user process can run without being terminated by SQL Server. Normally, a user process determines for itself how much CPU time it should use, and schedules itself appropriately. But if the amount of time the process runs exceeds the “time slice” setting, then SQL Server assumes that the process is stuck and then terminates it.
The default “time slice” setting is 100 milliseconds. If you set the “time slice” too low, it can slow down the system because of the extra overhead caused by processes having to reschedule themselves after being terminated for taking too long. If the “time slice” is too high, it can cause “stuck” user processes to waste CPU time.
One factor that affects how this setting should be set is the speed of the CPUs in the server. If your CPUs run at 400MHz or less, then you should probably not change this setting. If your CPUs run from 400-600MHz, then consider boosting the “time slice” to 200 milliseconds. If your CPUs run over 600MHz, then consider boosting the “time slice” to 300 milliseconds. Before using this setting in production, you will want to test your server’s performance before and after the change to see if SQL Server benefited from the change. [7.0] Updated 6-7-2004
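On SQL Server 7.0, the change would look something like the following sketch (this option does not exist in later versions, and the example assumes “time slice” is exposed as an advanced option on your build):

```sql
-- SQL Server 7.0 only: raise "time slice" from the default of 100
-- to 200 milliseconds (the suggested value for 400-600MHz CPUs above).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'time slice', 200;
RECONFIGURE;
```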