Underperforming server | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

Underperforming server

Hi all, first I must say this is both a great site & forum; I've learned plenty from here. Anyway, on to my problem. I've just purchased a brand new server to replace an existing SQL Server: Dell PowerEdge 1800
2 x Xeon 3.2 GHz – HTT disabled
2 GB RAM
4 x U320 HDD RAID 0 array. It's running Windows 2003 (member server), dedicated to running SQL 2000. I'm a little disappointed with the results I'm seeing, in particular for one job: converting data from our old DOS application to our new Windows system (DOS database to SQL database). In case you need to know, the conversion process is carried out by a VB6 application designed by our developers. Anyway, it turns out that a notebook computer here is able to carry out the conversion process in roughly the same time (3 hrs 10 mins), but has a far worse spec: Mobile 1.5 GHz processor
256 MB RAM
IDE HDD. I've read as many performance tuning articles as I possibly could but have not been able to make a single bit of difference to the new server. I've tried the following: Disabling HTT – did speed up the conversion process by around 15 mins.
Setting the max degree of parallelism to 1 – slowed down the conversion process.
Within the processor control area, setting SQL to use only one processor (processor affinity) – again slowed down the process.
Various other things like setting max worker threads lower. I've run Performance Monitor, and the only difference between the two machines is that the notebook mostly has the CPU maxed out, whereas the server averages around 50% for both processors. Please note, I'd like to think the test I'm carrying out is fair: I'm using the same data and conversion app on both machines, and carrying out the testing when the server isn't in use. The only differences are the OS and that the notebook is running MSDE 2000. Any tips on what I should be looking into would be greatly appreciated.
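For reference, the server-wide knobs mentioned above – max degree of parallelism, processor affinity, and max worker threads – are all changed through sp_configure on SQL Server 2000. A minimal sketch follows; the values shown are examples only, not recommendations:

    -- These are advanced options, so expose them first
    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    GO
    -- Limit parallel plans to one CPU (0 = use all available CPUs)
    EXEC sp_configure 'max degree of parallelism', 1
    RECONFIGURE
    GO
    -- Bitmask of CPUs SQL Server may use; 1 = CPU 0 only, 0 = all CPUs
    EXEC sp_configure 'affinity mask', 1
    RECONFIGURE
    GO
    -- Shrink the worker thread pool (SQL Server 2000 default is 255)
    EXEC sp_configure 'max worker threads', 128
    RECONFIGURE
    GO
    -- Review the current configured and running values
    EXEC sp_configure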
Hi ya, is the conversion process in both cases run locally, or over the network (i.e., it isn't run on the laptop in both cases, with the second being a network connection to the server rather than a local instance)? I'd suggest using SQL Profiler to look for any longer-running statements and trying to optimise those. I'd expect the server to run at about twice the speed of the laptop, all things being equal. The dual processors on the SQL Server would help throughput more than single-job performance. Cheers
Twan
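Following up on the Profiler suggestion: if the trace is saved to a file, it can be loaded back into a result set and sorted by duration to find the slowest statements. A rough sketch for SQL Server 2000, using a hypothetical trace file path C:\conversion.trc:

    -- Load a saved Profiler trace and list the slowest statements first
    SELECT TOP 50
           TextData,
           Duration,   -- reported in milliseconds in SQL Server 2000
           CPU,
           Reads,
           Writes
    FROM ::fn_trace_gettable('C:\conversion.trc', default)
    WHERE TextData IS NOT NULL
    ORDER BY Duration DESC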
I suspect your laptop is a Pentium M, so the frequency comparison is irrelevant; the Pentium M is a very good processor for DB apps.
The Pentium M at 1.5 GHz performs roughly like a 3.2 GHz Pentium 4. I'm curious about your disk array. Why did you choose a four-disk RAID 0 array? Are you sure that's what you have? What are your disk and memory counters looking like in Performance Monitor? Are you using SQL Server Standard Edition? MeanOldDBA
[email protected] When life gives you a lemon, fire the DBA.
… also refer to the articles on the home page of this site about capturing PERFMON counters that were mentioned by Derrick. HTH Satya SKJ
Moderator
http://www.SQL-Server-Performance.Com/forum
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
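If it helps, the SQL Server side of those PERFMON counters can also be read from inside SQL Server 2000 via master.dbo.sysperfinfo, which is handy when a Performance Monitor log isn't to hand. A small sketch; the counter patterns below are just examples of values worth watching:

    -- Memory and buffer-related counters exposed by SQL Server itself
    SELECT object_name, counter_name, instance_name, cntr_value
    FROM master.dbo.sysperfinfo
    WHERE counter_name LIKE 'Buffer cache hit ratio%'
       OR counter_name LIKE '%Server Memory%'
       OR counter_name = 'Page life expectancy'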
Apologies for the delay in posting back, I've been on my Christmas break.

Anyway Twan, I should have mentioned all tests were carried out locally (no network involved); I'm using a local instance of MSDE 2000 on the notebook.

Joechang, indeed the notebook is a mobile Pentium. It's annoying that SQL runs better on Pentium Ms than Xeons then; I wish I'd known this before ordering the new server… I'd probably still have gone with Xeons though, as a notebook would probably have caught fire with all the backups/restores that go on :)

derrickleggett, yes I'm 100% sure regarding the disk array, I fitted it myself. The reason for choosing RAID 0 was this:

The SQL server is used by our support team to restore customer databases, often around 2 GB, so they can replicate problems/issues. We've got four people carrying out this process around four times a day, so backing up and restoring SQL databases is the most common task. Data redundancy isn't a problem: customer data arrives on CD-R so is easy to recover, and all databases created by our developers are backed up onto a separate HDD and then to tape.

The server is running SQL 2000 Standard Edition, and as previously mentioned the notebook runs MSDE 2000.

Performance Monitor figures are much the same on the notebook and server, except the server has much lower disk queue lengths and % processor time. The server also looks to be generating slightly more page faults/sec than the notebook, which again is strange as I've tried dynamically lowering the max memory SQL can grab, so the server often reports 512 MB of free RAM.

Satya, I'll have another read of the article and take another look at the server as soon as I'm back in the office.
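As an aside, the memory ceiling described above is the 'max server memory (MB)' setting; leaving it at its default lets SQL Server size its memory dynamically. A sketch of checking and resetting it on SQL Server 2000 (the value shown is simply the documented default):

    EXEC sp_configure 'show advanced options', 1
    RECONFIGURE
    GO
    -- Show the current ceiling
    EXEC sp_configure 'max server memory (MB)'
    -- Restore the default so SQL Server manages memory dynamically again
    EXEC sp_configure 'max server memory (MB)', 2147483647
    RECONFIGURE
    GO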
quote:Originally posted by DavidB1
In case you need to know the conversion process is carried out by a VB6 application designed by our developers.

I would take a look at what the application actually does and how it fetches the data. Is it by ADO server-side cursors, row by row, by any chance? We had to host an application similar to this, and just fetching each row took about one second because slow ADO server-side cursors with row-by-row processing were used. It didn't matter how much power you threw at the application in terms of CPU, RAM, etc. Bring up a Profiler trace and check for cursorfetch statements. If you have a ton of them one after another, this might be the issue.
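To illustrate the point about row-by-row fetching: each fetch is a separate round trip, so per-row processing scales with row count regardless of server horsepower, whereas a set-based statement does the same work in one pass. A simplified T-SQL sketch with made-up tables src and dest:

    -- Hypothetical tables, for illustration only
    CREATE TABLE src  (id int, payload varchar(100))
    CREATE TABLE dest (id int, payload varchar(100))

    -- Row-by-row pattern (roughly what per-row cursor/ADO processing does)
    DECLARE @id int, @payload varchar(100)
    DECLARE conv CURSOR FOR SELECT id, payload FROM src
    OPEN conv
    FETCH NEXT FROM conv INTO @id, @payload
    WHILE @@FETCH_STATUS = 0
    BEGIN
        INSERT INTO dest (id, payload) VALUES (@id, @payload)
        FETCH NEXT FROM conv INTO @id, @payload
    END
    CLOSE conv
    DEALLOCATE conv

    -- Set-based equivalent: one statement, no per-row round trips
    INSERT INTO dest (id, payload)
    SELECT id, payload FROM src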
Look at your execution plans and see if you're getting a LOT of parallelism in your queries. I'm assuming you are, based on your previous posts. MeanOldDBA
[email protected] When life gives you a lemon, fire the DBA.
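One way to see whether the conversion queries are going parallel, and to rein parallelism in per statement rather than server-wide, is sketched below; the SELECT is only a placeholder standing in for one of the actual conversion queries:

    -- Show the estimated plan as text; look for 'Parallelism' operators
    SET SHOWPLAN_ALL ON
    GO
    SELECT id, payload FROM src   -- placeholder for a real conversion query
    GO
    SET SHOWPLAN_ALL OFF
    GO

    -- Cap a single statement at one CPU without changing the server-wide setting
    SELECT id, payload FROM src OPTION (MAXDOP 1)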