
SQL Server Performance Forum – Threads Archive

When do you know you’ve outgrown SQL7/NT4?

Hello. We currently have our core CRM application running on SQL 7, and NT4.0. We did a hardware upgrade about a year ago, because users (around 400) were complaining about deathly slow performance. Once the hardware upgrade was complete, the users were very happy with the performance improvements. The database server is now a dual 3GHz (Xeon) Proliant, with 3GB of RAM. Now, after 10 months, users are again experiencing deathly slow performance. Now that that is out of the way, I was just wondering if there were any real signs that we have outgrown SQL7/NT4.0, and need to go to 2K (possibly Enterprise). I would like to bring some real stats to the management team, and be able to concretely tell them that we have outgrown the current OS and SQL revision. Are there performance counters I can look at? Any advice would be much appreciated. Thanks,
Darnell.
The simplest thing to do is probably to run the PSSDIAG kit from Microsoft, then mail the output to someone who knows how to analyze it:
http://support.microsoft.com/default.aspx?scid=kb;en-us;830232 Brad also has several articles on this site on how to do performance analysis, if you are so inclined.

I’ve installed SQL 2000 several times as a replacement for SQL 7.0 on NT 4.0.
Every process ran faster than on SQL 7.0, and better still on W2K (based on tests and customer feedback).
One recommendation: if you are planning to install SQL 2000 on NT 4.0 over an existing SQL 7.0, I recommend detaching the databases (or backing them up, etc.), removing SQL 7.0, and installing SQL 2000 fresh. I have never had success with the in-place upgrade process. Luis Martin
Moderator
SQL-Server-Performance.com All postings are provided “AS IS” with no warranties for accuracy.
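For anyone following Luis's detach/reattach route, here is a minimal sketch of the approach on SQL 7.0/2000 (the database name and file paths are hypothetical):

-- On the old SQL 7.0 server, detach the database so its files can be copied
EXEC sp_detach_db 'CRM'

-- Copy the .mdf/.ldf files to the new server, then attach them to the fresh
-- SQL 2000 instance; SQL 2000 upgrades the database structures during the attach
EXEC sp_attach_db @dbname = 'CRM',
     @filename1 = 'D:\MSSQL\Data\CRM.mdf',
     @filename2 = 'E:\MSSQL\Logs\CRM_log.ldf'

A full backup before detaching is still the safer bet, since the attach is one-way once the old instance is gone.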
I would highly recommend upgrading the OS to W2K, or even better W2K3.
I would have thought that benefiting from SQL 2000 over SQL 7.0 would require tuning to the new capabilities of SQL 2000, but I have never verified this.
I have been doing some more digging, and I have noticed that the % Disk Time for the array that the database sits on often goes to 100% and stays there for 10 seconds at a time. This seems to be during reads, not writes. The Current Disk Queue Length doesn’t seem to indicate a problem though, bouncing between 0 and 4 (two physical disks in the array). Our RAID controller has 192MB of cache on it, and is split 50/50 read/write. Any ideas what would cause the % Disk Time to be so high?
How about placement of data and log files on the system?
What was the previous memory utilization for the server? How is the server primarily used?
Is memory utilization constant, or does it rise to 1.5 GB at certain times?
How much total memory is in the server, and how much of it is allocated to SQL Server?
Satya SKJ
Moderator
http://www.SQL-Server-Performance.Com/forum
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
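If it helps with that last question, the allocation settings can be read straight from the server on SQL 7.0/2000. A rough sketch, assuming sysadmin or serveradmin rights:

-- Expose the advanced options so the memory settings are visible
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO

-- Min/max memory SQL Server is allowed to take (run_value column, in MB);
-- with dynamic memory the max defaults to 2147483647
EXEC sp_configure 'min server memory (MB)'
EXEC sp_configure 'max server memory (MB)'
GO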
It’s worth running a profile of the sort of queries the CRM is running. I’m assuming it’s an off-the-shelf package and not an in-house development. There is a long list of commercial CRM packages that are notoriously badly written in terms of SQL performance.
The database and the transaction logs are on two separate arrays. Each array consists of two 146GB disks in a RAID 1+0 configuration. The array controller is a fairly new Compaq Smart Array 6400, which really hauls. The server is dedicated to SQL. Memory utilization remains constant at 1.8GB. SQL is configured to use RAM dynamically. I am monitoring quite a few things right now, like Full Scans/sec and Page Splits/sec, but I’m really not sure what numbers are acceptable. The Buffer Cache Hit Ratio is above 99%. With the SQL counter Cache Size (pages), if I multiply the number I am getting by 8K, will that give me the amount of RAM SQL actually needs? If so, this number is very low, around 150MB. There is a lot of suspicion surrounding the CRM application, since it was bought ‘off the shelf’ around 5 years ago. Our developers have done a lot of customizing, but the application has a lot of ‘core’ code that can’t be touched.
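On SQL 7.0/2000 the same counters PerfMon exposes can also be read from master..sysperfinfo, which makes them easy to log from a job. A rough sketch; the exact counter-name strings can vary slightly between builds, so treat them as assumptions to verify:

-- Buffer cache hit ratio must be derived from its raw value and its base counter
SELECT (a.cntr_value * 100.0) / b.cntr_value AS [Buffer cache hit ratio %]
FROM master..sysperfinfo a
JOIN master..sysperfinfo b ON b.counter_name = 'Buffer cache hit ratio base'
WHERE a.counter_name = 'Buffer cache hit ratio'

-- Cache Size (pages) * 8 KB = memory currently used for the data cache
SELECT cntr_value AS [Cache pages],
       (cntr_value * 8) / 1024 AS [Cache size MB]
FROM master..sysperfinfo
WHERE counter_name = 'Cache Size (pages)'

Note that Cache Size (pages) only covers the data cache; the procedure cache, connections, locks and so on come on top of it, so it is not the whole amount of RAM SQL needs.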
Maybe it’s time to consider an upgrade. If the application code is very poor, moving to SQL 2K might help a little, but in the end the application will start bogging you down again.
The PERFMON stats don’t seem suspicious in this case, and as Chappy says, it may be time to consider an upgrade or to fine-tune the CRM application. Satya SKJ
Moderator
http://www.SQL-Server-Performance.Com/forum
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
It’s one of the most common issues in database environments. A lot of the time (80% of cases and up), the application team comes and complains that the server was running fine and now it’s not. In almost all of these cases it is code that ran fine on a small database (sometimes even without indexes). Only when the database grows bigger do they see an issue. Look at your queries, turn on STATISTICS IO, and look at your joins. A deadly combination of high reads and poor joins results in exactly this kind of behavior. My recommendation is to look at your queries. If your reads are high, I would check the cache hit ratio and the disk reads (physical vs. logical). I would recommend query tuning.
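To put numbers behind the "look at your reads" advice, here is a minimal sketch of using STATISTICS IO from Query Analyzer (the query and table names are made up for illustration):

-- Report per-table I/O and elapsed times for each statement in the session
SET STATISTICS IO ON
SET STATISTICS TIME ON
GO

-- Run one of the suspect CRM queries here, e.g. a join like:
SELECT c.CustomerName, o.OrderDate
FROM Customers c
JOIN Orders o ON o.CustomerID = c.CustomerID
WHERE o.OrderDate >= '20040101'
GO

-- The Messages tab then shows scan count, logical reads (pages served from cache)
-- and physical reads (pages pulled from disk) for every table touched.
-- Very high logical reads on a join are the usual sign of a missing or unusable index.
SET STATISTICS IO OFF
SET STATISTICS TIME OFF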
See if SQL Server is doing a lot of I/O. Tools like SQL Profiler and the Blocker Script output can tell you whether particular queries are generating these large I/Os, and tuning or optimizing those individual queries from a statistics and index point of view may help from the disk I/O perspective. An improper index strategy can add to the disk subsystem workload. For example, instead of doing an Index Seek, the SQL Server optimizer may choose to do a Table/Index Scan, resulting in a lot of unnecessary I/O. Satya SKJ
Moderator
http://www.SQL-Server-Performance.Com/forum
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
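A hedged sketch of how to see which choice the optimizer is making, and what a fix might look like (the table, column and index names are hypothetical):

-- Show the query plan as text instead of executing the statement (SQL 7.0/2000);
-- SET SHOWPLAN_TEXT must be alone in its batch
SET SHOWPLAN_TEXT ON
GO
SELECT OrderID, OrderDate
FROM Orders
WHERE OrderDate >= '20040601'
GO
SET SHOWPLAN_TEXT OFF
GO

-- If the plan shows a Table Scan or Clustered Index Scan on a large table for a
-- selective filter, an index on the filter column can turn it into a seek:
CREATE INDEX IX_Orders_OrderDate ON Orders (OrderDate)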
I would strongly recommend moving away from NT4, since it’s reaching end of life at the end of this year and no more hotfixes or security patches will be available. http://support.microsoft.com/gp/lifewin "Windows NT Server 4.0 incident and security related hotfix support is available through 31-Dec-2004. Non-security related hotfix support is available only through a custom support contract after 31-Dec-2003"
The CRM application is Janna (now owned by Siebel). We have done quite a bit of customizing, but there is a lot of core code we can’t touch. The Buffer Cache Hit Ratio still sits at around 99.7%, so we can rule out a memory problem. Both CPUs sit at around 35-40% all day. There are spikes, but not for any significant amount of time. Looks like it’s time to tell the DBAs (again) that they need to do some more work on their queries. Thanks for all the replies.
DP.
In my experience, several times with SQL 7.0 SP4, some queries took a long time and did huge reads.
Once converted to SQL 2000 SP3, all those queries worked fine without any additional indexes. Luis Martin
Moderator
SQL-Server-Performance.com All postings are provided “AS IS” with no warranties for accuracy.
True, I agree with the gang: upgrade the OS and SQL to the higher versions to take advantage of optimizer improvements and performance gains. And you can see from Argyle’s reference that NT support from MS is nearing its end. Satya SKJ
Moderator
http://www.SQL-Server-Performance.Com/forum
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
I just heard from an HP engineer who told us that Hyper-Threading should be disabled in the BIOS on our server. He says it can hurt performance when running Windows NT 4.0/SQL 7.0. Does anybody have any comments on this, or maybe an example that supports his claim?
I agree about HT. Even Win2K is not all that intelligent about HT; only Win2003 is capable of handling the complexity of HT. If support for HT is required at all, please consider Win2003. That is all I would say at this time. However, application tuning remains at the top of the list for any serious improvements. Just an opinion.
High CPU usage can have many causes; to mention just a few I have had to deal with most often:
1. Cursors
2. SP recompilations because of temp table usage inside stored procedures
3. SP recompilations because of inconsistent capitalization of procedure names when called from the application
4. SP recompilations because of branching in the code
To solve issue #2, SQL 2000 offers table variables and table functions that can replace temp tables and don’t cause SP recompilations, as in the sketch below. Of course, the application has to use stored procedures and you have to be allowed to make changes in their code to take advantage of this feature.
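A minimal sketch of that temp-table-to-table-variable swap for issue #2 (the procedure, table and column names are invented for illustration):

-- SQL 7.0 style: a #temp table created inside the procedure can trigger recompiles
CREATE PROCEDURE usp_RecentOrders_Temp @CustomerID int AS
    CREATE TABLE #Recent (OrderID int, OrderDate datetime)
    INSERT #Recent (OrderID, OrderDate)
        SELECT OrderID, OrderDate
        FROM Orders
        WHERE CustomerID = @CustomerID
          AND OrderDate >= DATEADD(dd, -30, GETDATE())
    SELECT OrderID, OrderDate FROM #Recent
GO

-- SQL 2000: a table variable holds the same rows without causing the recompiles
CREATE PROCEDURE usp_RecentOrders_TableVar @CustomerID int AS
    DECLARE @Recent TABLE (OrderID int, OrderDate datetime)
    INSERT @Recent (OrderID, OrderDate)
        SELECT OrderID, OrderDate
        FROM Orders
        WHERE CustomerID = @CustomerID
          AND OrderDate >= DATEADD(dd, -30, GETDATE())
    SELECT OrderID, OrderDate FROM @Recent
GO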
