stop an individual query using 100% CPU | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive


Hi, I’ve a report running which uses 100% CPU (single processor); however, there are also a number of other queries running at the same time. These queries are getting killed at the moment (taking 32 secs instead of sub-second times). Is there any way to stop an individual query from using 100% CPU? Regards, linnada
Hi, you can issue a KILL command to kick the query out… only badly designed queries utilize 100% CPU, or a simple ‘select’ returning a large resultset. Have you checked the statistics on the query you are running? Cheers
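As a minimal sketch (the session ID 57 below is a placeholder; identify the real one from sp_who2 first):

```sql
-- Spot the CPU-heavy session: look at the CPUTime column in the output.
EXEC sp_who2;

-- Terminate it. 57 is a hypothetical SPID; KILL rolls back any open
-- transaction on that session, which itself can take a while.
KILL 57;
```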
Sat
As always take help of following articles to resolve the issue:
http://sqljunkies.com/Article/7F8518F9-FDAA-4FF3-8FC5-25E8946C8D0C.scuk
http://www.sql-server-performance.com/forum/topic.asp?ARCHIVE=true&TOPIC_ID=1962
http://support.microsoft.com/kb/224587
http://support.microsoft.com/kb/835864 Satya SKJ
Microsoft SQL Server MVP
Writer, Contributing Editor & Moderator
http://www.SQL-Server-Performance.Com
This posting is provided AS IS with no rights for the sake of knowledge sharing. The greatest discovery of my generation is that a human being can alter his life by altering his attitudes of mind.
Is there a setting that can limit the amount of CPU a single connection can consume in SQL Server 2005? When a single connection drives CPU to 100% utilization, it noticeably impacts performance. SQL Server does a good job of servicing multiple requests when the disk is the bottleneck, but it does not do very well when the CPU is the bottleneck.
I know which query is causing the 100% CPU utilization but nothing can be done to address this with the 3rd party database we are using without violating our support agreement.
The best I was able to do was add an OPTION (MAXDOP 2) hint to limit the number of CPUs the offending query could use. This way, the one bad query that I knew was bad and had to run doesn’t degrade the entire server.
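The hint attaches to the end of the statement; the table and column names below are illustrative, not from the actual report:

```sql
-- Restrict this one query to 2 of the 4 CPUs via a query-level hint,
-- leaving the remaining schedulers free for the sub-second queries.
SELECT OrderID, SUM(LineTotal) AS Total
FROM dbo.OrderDetails          -- hypothetical reporting table
GROUP BY OrderID
OPTION (MAXDOP 2);
```

The server-wide equivalent is sp_configure ‘max degree of parallelism’, but that affects every query, not just the offending one.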
Is there another option?
Thanks,
John Lehew

Why not find out why that query is taking 100% of resources? Maybe it needs relevant indexes to return results efficiently, or even defragmenting the existing indexes will help a lot.
If temp tables are used here, then ensure you monitor TEMPDB contention during this process. Using the above links you can find the DMVs that return such stats to help control the execution.
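For instance, the standard SQL Server 2005 DMVs can list the most CPU-hungry cached statements; this is a generic diagnostic query, not specific to your workload:

```sql
-- Top 5 cached statements by total CPU time (worker time is in microseconds).
SELECT TOP 5
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```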

Adding/modifying indexes is not an option, as that would violate our support agreement. I’ve asked the question and was specifically told not to go in that direction. I realize that reviewing indexes and tuning the queries is the typical way to resolve this; however, what are some of the other options? I found MAXDOP as one option, and setting up a separate read-only database similar to a reporting server is another, though costly, option. Is there a third option?
Thanks,
John

What is the available memory on the server?
Is this a dedicated server for SQL Server?
If you are managing the database, then why should a sensible approach to indexing violate the support agreement? (Not a technical question, but it could clarify a few doubts [:)].)

It has 4 GB of memory with the /3GB switch. The database is about 20 GB.
It is a dedicated box with four 3.0 GHz CPUs, not a virtual machine.
We have 35-40 open issues with the product and have 5 tickets that are officially open and we are working. We do not want anything that might give the vendor a way around resolving the issues. If they notice that we reindexed their tables, we could lose credibility and they could discount all the issues we have with them. There are essentially political reasons not to mess with the indexes.

Understood the issues around beating around the bush for such an agreement with third-party tools [:mad:] grrr…
But since you have identified which queries are causing the issues, it is better to take/escalate the matter to management for further tuning. Without proper indexing, any other action may dampen performance further, as the system is already struggling.

All of the data in their system is stored in XML and each XML string is about 25-50K long. There is some indexing but to access it requires creating an XML DOM then executing XPath expressions to pull out the data that’s needed. This is very slow so we are going around their interface and caching their XML data in our own SQL tables so we can index the data efficiently for reporting purposes. Since we are accessing their system without going through their interface we are already doing something that is not supported and they have no plans to support it.
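A minimal sketch of that caching approach, assuming the vendor stores its documents in an xml-typed column (the table, column, and XPath names here are all hypothetical):

```sql
-- Shred the vendor's XML into a local, indexable reporting table
-- using the SQL Server 2005 xml type's value() method.
INSERT INTO dbo.ReportCache (CustomerID, OrderDate)
SELECT
    v.xml_blob.value('(/Order/CustomerID)[1]', 'int'),
    v.xml_blob.value('(/Order/OrderDate)[1]',  'datetime')
FROM dbo.VendorData AS v;   -- xml_blob is an xml column in the vendor table

-- Once the data is relational, it can be indexed for reporting
-- without touching the vendor's own tables.
CREATE INDEX IX_ReportCache_OrderDate ON dbo.ReportCache (OrderDate);
```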
Are there any undocumented features that could limit the amount of resources a connection can take?
Thanks,
John
