
SQL Server Performance Forum – Threads Archive

Advice request

I need help figuring out the best way to configure my database servers to handle the coming workload. Here is my situation: the current load consists of one ERP and several smaller production applications. This, in turn, consists of several production databases with approximately 250 users; total mdf space is 80 GB. The server is a 4-processor Xeon 3 GHz with 24 GB RAM. Additional software packages that will require SQL Server are now being purchased by different departments, and these databases will be heavily utilized/abused. Add another 250 users. Then add in all of the reports that will be hitting this system, and suffice it to say that the current server will not handle the pending load. Forecasted growth for the new system is approximately 35 GB per year, which in turn consists of 150–175 production databases. There are no plans to turn to data warehousing, ever. Software:
Windows 2003 Enterprise Edition
SQL Server 2000 Enterprise Edition

I am considering two separate hardware options.
The first option is a really big box. Keep in mind that this is a Dell house, and unless I can make a really strong case, it shall remain a Dell house. I am looking at 4 Itanium processors with 32 GB RAM and as much hard drive space as I can possibly get. My concern is that for every major application I am going to have to spec out a new box.

The second option consists of multiple servers, four in total, with a SAN on the back end. I was thinking that each server would have dual 3 GHz Xeons, expandable to 4, with 32 GB RAM. I am considering an active/active/active/passive cluster.

Which makes the most sense to everyone? More importantly, can I have multiple installations of SQL Server hitting a common SAN? I read in Romanth Nirmal's article, SQL Server Clustering: 2000 to 2005, that in a cluster only one instance of SQL Server can utilize the physical hard drives at any point in time. If I am reading this correctly, I need to have a separate set of drives for every environment. Does this include the backup drive, or can that be a shared logical partition?

Any assistance/advice/pointers/questions are appreciated. Thanks, Robert
I would not recommend multiple servers connected to a single SAN;
there are just too many cases of horrible disk performance when a SAN hosts storage for multiple servers.
Isolated servers are more predictable than a single big server hosting multiple databases.
I don't have a problem with a good SAN hosting multiple database servers. Any server with high IO requirements, though, should have dedicated drives (physical drives, not just LUNs) with a good RAID setup. Use RAID 10 if you have to. DO NOT share physical drives between multiple servers with high IO requirements. You will regret it, regardless of what some genius SAN engineer tells you. What type of SAN are you looking at? Is a lot already running on it? Do you know what the current utilization is? MeanOldDBA
[email protected] When life gives you a lemon, fire the DBA.
Joe, thanks for the information, though I do have a few points I would like to expand upon. The SAN I am looking at would be strictly for database server use, with separate physical RAID sets for each production environment. To be quite honest, I am reluctant to go to a SAN as neither I nor anyone else here has ever worked with one, but it appears to be the best solution for future expansion. I considered federated servers with partitioned views, but I don't know the code for the applications we have, so I cannot guarantee that the data would be stored correctly. With regard to one big server hosting multiple databases: that is really my only choice, as the second application, an engineering/drafting solution, is designed to create a separate database for every project we work on, and we do between 100 and 150 projects per year. After a couple of years I will have several hundred databases for this one system, and I am concerned about disk space and performance. Currently our ERP has 9 production databases, along with one testing/training database. They aren't large, but they are heavily hit, both read and write, and I am moving to SQL Server Reporting Services to offload this production server, but that is only in development at this stage. In one of your responses to a different thread you mentioned SAS; is this a possible solution for me?
Thanks, Robert
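For reference, a minimal sketch of what the federated-server approach mentioned above looks like in SQL Server 2000: a distributed partitioned view is a UNION ALL view over member tables that live on different servers, with CHECK constraints marking which rows each member holds. All names here (servers, database, table, columns) are invented for illustration; the catch noted in the post is that the applications would have to query the view rather than the member tables, which cannot be guaranteed without control of their code.

```sql
-- Hypothetical illustration only; server, database, table, and column names
-- are made up. Run the member-table DDL on each server, then create the view
-- on every member (each server needs the others defined as linked servers).

-- On SERVER1, database Sales:
CREATE TABLE dbo.Projects_1 (
    ProjectID   int          NOT NULL CHECK (ProjectID BETWEEN 1 AND 499999),
    ProjectName varchar(100) NOT NULL,
    CONSTRAINT PK_Projects_1 PRIMARY KEY (ProjectID)
)
GO

-- On SERVER2, database Sales:
CREATE TABLE dbo.Projects_2 (
    ProjectID   int          NOT NULL CHECK (ProjectID BETWEEN 500000 AND 999999),
    ProjectName varchar(100) NOT NULL,
    CONSTRAINT PK_Projects_2 PRIMARY KEY (ProjectID)
)
GO

-- On each member server: the distributed partitioned view the applications
-- would have to target instead of the local tables.
CREATE VIEW dbo.Projects
AS
SELECT ProjectID, ProjectName FROM SERVER1.Sales.dbo.Projects_1
UNION ALL
SELECT ProjectID, ProjectName FROM SERVER2.Sales.dbo.Projects_2
GO
```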
Derrick,

I am considering the CX500 from Dell, with physical RAID arrays for each production environment, i.e. ERP on one set, the engineering/drafting application on another, and the remaining production databases on a third set.

I use Performance Monitor periodically throughout the day to get an idea of the usage: Page Faults/sec, Pages/sec, and the processor % for all of my processors. Nothing exact or recorded, just a general idea. I recently added an additional 16 GB of RAM, which helped bring the page faults down from 13–21k/sec to the occasional spike of 4k with an average of 100. Unfortunately, I am the only DBA here, so getting a lot of the would-really-like-to-have information is impossible, or rather it is impossible to spend time researching exactly what I am looking at. I know, I know, it is critical to have that information; just tell me how to explain that to the users. :)

Any suggestions?

Thanks,

Robert
It’s important you have the information if you’re going to invest the kind of money to buy a CX500 from Dell. This is basically a rebranded EMC SAN. MeanOldDBA
[email protected] When life gives you a lemon, fire the DBA.
I would love to get the information, but we don't have the new application yet. I can pull the information for the current system, which will help, but I am currently unsure of which specs I should be tracking and what exactly they would mean to me. Hopefully, Kalen Delaney's book Inside SQL Server 2000 will shed some light. By the way, what do you think of the CX500? Thanks for your help. Thanks, Robert
The CX500 is alright. It’s definitely better than the older series. There’s a LOT more cache and the processors are better. We had a lot of disk failures with an old CX300 series SAN, which we haven’t seen so far since we upgraded. In addition, the overall performance and capacity is far better. Overall, it’s a good SAN. It’s far better than an MSA series (HP) and not nearly as good as a Symmetrix type array. To get an idea of the counters we would need to help you more, look at this post: http://www.sql-server-performance.com/forum/topic.asp?TOPIC_ID=10759 MeanOldDBA
[email protected] When life gives you a lemon, fire the DBA.
OK, I have some specifications along with the drive configuration.

Two PERC 4/DC controllers
Controller 0
C: OS (RAID 1)
D: Logs (RAID 1)
Controller 1
E: Data (RAID 5)
F: Backups (RAID 5)

Cache Hit Ratio
Average: 71.287
Maximum: 71.288

Buffer Cache Hit Ratio
Average: 99.863
Maximum: 99.882

Processor (%)
Average: 3.5
Maximum: 21.876

Disk Transfers/Sec
D: Average: 1.687, Maximum: 16.0
E: Average: 7.206, Maximum: 261.125
F: Average: 0.707, Maximum: 25.001

Disk Writes/Sec
D: Average: 22.058, Maximum: 569.015
E: Average: 0.626, Maximum: 7.0
F: Average and Maximum: 0

Disk Reads/Sec
D: Average and Maximum: 0
E: Average: 2.424, Maximum: 155.011
F: Average: 0.02, Maximum: 2.0

Page Faults/Sec
Average: 117.157
Maximum: 3216.129

Does this provide you with all the necessary information? If not, let me know and I will delve a little deeper. Now to read up on exactly what all of these numbers mean. Thanks, Robert
We need the disk read and write queue lengths for each disk. Also, since your cache hit ratio is so low, show us the detail counters for it. MeanOldDBA
[email protected] When life gives you a lemon, fire the DBA.
OK, here goes.

Current Disk Queue Length
D: Average & Maximum: 0
E: Average: 0, Maximum: 2
F: Average & Maximum: 0

Average Disk Write Queue Length
D: Average: 0.005, Maximum: 0.081
E: Average: 0.002, Maximum: 0.101
F: Average: 0.000, Maximum: 0.000

Average Disk Read Queue Length
D: Average: 0.000, Maximum: 0.000
E: Average: 0.003, Maximum: 0.149
F: Average: 0.000, Maximum: 0.000

Cache Hit Ratio – Detailed
Adhoc Sql Plans: Average: 99.023, Maximum: 99.023
Cursors: Average: 97.286, Maximum: 97.286
Execution Contexts: Average: 39.215, Maximum: 39.218
Misc. Normalized Trees: Average: 94.707, Maximum: 94.707
Prepared SQL Plans: Average: 99.937, Maximum: 99.937
Procedure Plans: Average: 98.769, Maximum: 98.769
Replication Procedure Plans: Average: 0.000, Maximum: 0.000
Trigger Plans: Average: 99.710, Maximum: 99.710

Are these values percentages or absolute values? Thanks, Robert
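For reference (not part of the thread's replies): the figures above come from the SQLServer:Cache Manager perfmon object, where each cache type is an instance and the displayed value is a percentage. The same data can be read from inside SQL Server 2000 via master.dbo.sysperfinfo, where each 'Cache Hit Ratio' counter has a matching 'Cache Hit Ratio Base' and the percentage is the value divided by its base. A minimal sketch, assuming a default instance:

```sql
-- Read the Cache Manager hit ratios directly from SQL Server 2000.
-- Dividing 'Cache Hit Ratio' by 'Cache Hit Ratio Base' gives the percentage
-- that Performance Monitor shows for each cache type.
SELECT  RTRIM(r.instance_name) AS cache_type,
        CASE WHEN b.cntr_value = 0 THEN 0
             ELSE 100.0 * r.cntr_value / b.cntr_value
        END AS hit_ratio_pct
FROM    master.dbo.sysperfinfo AS r
JOIN    master.dbo.sysperfinfo AS b
          ON  b.object_name   = r.object_name
          AND b.instance_name = r.instance_name
          AND b.counter_name  = 'Cache Hit Ratio Base'
WHERE   r.object_name  LIKE '%Cache Manager%'
  AND   r.counter_name = 'Cache Hit Ratio'
ORDER BY cache_type
```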
Derrick, I am trying to find more information on one of the detailed Cache Hit Ratio percentages, Execution Contexts, but other than the very short explanation in Performance Monitor I am unable to find anything that identifies exactly what this is. Any suggestions? Are the read/write queue lengths anything to worry about? I also pulled the Disk Transfers/Sec:

D: Average: 1.687, Maximum: 16.0
E: Average: 7.206, Maximum: 261.125
F: Average: 0.707, Maximum: 25.001

Are these values within the normal range? Appreciate the help. Thanks, Robert
BOL defines:
Buffer Cache Hit Ratio –
Percentage of pages found in the buffer cache without having to read from disk. The ratio is the total number of cache hits divided by the total number of cache lookups since an instance of SQL Server was started. After a long period of time, the ratio moves very little. Because reading from the cache is much less expensive than reading from disk, you want this ratio to be high. Generally, you can increase the buffer cache hit ratio by increasing the amount of memory available to SQL Server. Also refer to http://www.sqldev.net for more information on these counters. Satya SKJ
Moderator
http://www.SQL-Server-Performance.Com/forum
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
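As a worked example of the BOL formula quoted above (cache hits divided by cache lookups), the counter pair can also be read from master.dbo.sysperfinfo in SQL Server 2000; the 'base' counter holds the lookups. A minimal sketch, assuming a default instance:

```sql
-- Buffer cache hit ratio = cache hits / cache lookups, as defined in BOL.
SELECT 100.0 * r.cntr_value / NULLIF(b.cntr_value, 0) AS buffer_cache_hit_ratio_pct
FROM   master.dbo.sysperfinfo AS r
JOIN   master.dbo.sysperfinfo AS b
         ON b.object_name = r.object_name
WHERE  r.object_name  LIKE '%Buffer Manager%'
  AND  r.counter_name = 'Buffer cache hit ratio'
  AND  b.counter_name = 'Buffer cache hit ratio base'
```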
Satya, thanks for the information. I had already found that general information, and all of my detailed counters are over 94 percent, except Execution Contexts, which is at 39 percent. The memory allocated to SQL Server is 21 GB and I have no more room in the server, so I am limited in what I can do. I was hoping to find out what each of these detailed counters means so that I can determine whether offloading the reporting to a different server will make that big of a difference, or whether there is a better solution. I will take a look at Gert's site and see if he has anything on these detailed counters or can point me in the right direction. Thanks for the help. Thanks, Robert
Sure, you can find relevant information about these counters on Gert's site; it has helped a lot many times. BTW, how often are you reindexing the tables and recompiling the plans? Satya SKJ
Moderator
http://www.SQL-Server-Performance.Com/forum
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
I reindex the tables on a monthly basis, and I have never forced a recompilation, as it is supposed to be done automatically by the engine. Is it worth forcing a recompilation on a scheduled basis for the triggers and procedures? Thanks, Robert
Yes, occasional recompiling will have definite results; also think about updating statistics intermittently for optimum performance. Satya SKJ
Moderator
http://www.SQL-Server-Performance.Com/forum
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
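For what the reindexing, statistics update, and forced recompilation being discussed might look like as a scheduled job in SQL Server 2000, here is a minimal sketch. It leans on the undocumented but commonly used sp_MSforeachtable helper, and 'ERP_Prod' is a made-up database name, so adjust both to suit.

```sql
-- Minimal sketch of a SQL Server 2000 maintenance step: reindex, refresh
-- statistics, and mark procedures/triggers for recompilation.
USE ERP_Prod   -- hypothetical database name
GO
-- Rebuild all indexes on every user table (the monthly job in this thread).
EXEC sp_MSforeachtable 'DBCC DBREINDEX (''?'')'
GO
-- Refresh statistics on every user table.
EXEC sp_MSforeachtable 'UPDATE STATISTICS ?'
GO
-- Force stored procedures and triggers that reference each table to be
-- recompiled on their next execution.
EXEC sp_MSforeachtable 'EXEC sp_recompile ''?'''
GO
```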
Satya, thank you for the information. I will have to update my maintenance scripts to include the recompilation and the updating of statistics. Do you happen to know where I can find the SQL Server capacities for both the Enterprise and Standard Editions when running on Windows 2003 Enterprise Server? I looked in BOL, but it only has information for Windows 2000 Server; I tried MSDN, Microsoft, and anything pulled up by Google, but they were all for 2000.
Thanks, Robert
I believe there wouldn't be any change when running any SQL Server edition on Windows 2003 versus 2000.
The only change you would see is between SQL Server Enterprise and Standard editions. Satya SKJ
Moderator
http://www.SQL-Server-Performance.Com/forum
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
Hi Satya, sorry for the delay in getting back to you. There must be a difference, as we are currently using 24 GB of RAM with Windows 2003 Enterprise Edition running SQL Server Enterprise Edition, and with Windows 2000 Advanced Server there is a limit of 8 GB of RAM. I just cannot remember where I saw the specifications. Thanks, Robert
Definitely, Win2k3 has the upper hand in managing system resources compared to Win2k Advanced, provided the boot settings and memory have been allocated as per the KB articles. Satya SKJ
Moderator
http://www.SQL-Server-Performance.Com/forum
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
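On the boot-setting and memory-allocation point: for SQL Server 2000 Enterprise Edition to use the 21 GB mentioned earlier on a 32-bit Windows 2003 box, the server is normally booted with the /PAE switch in boot.ini, the service account is granted the 'Lock pages in memory' right, and AWE is enabled. A minimal sketch of the SQL-side settings, with the 21 GB figure taken from earlier in the thread:

```sql
-- Minimal sketch of the memory configuration being discussed (SQL Server 2000
-- Enterprise Edition on 32-bit Windows 2003, booted with /PAE).
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- Enable AWE so SQL Server can address memory above 4 GB. Takes effect after
-- a restart; the service account needs the 'Lock pages in memory' privilege.
EXEC sp_configure 'awe enabled', 1
RECONFIGURE
GO
-- With AWE on, SQL Server 2000 does not manage memory dynamically, so cap it
-- explicitly. 21504 MB is the 21 GB allocation mentioned in this thread.
EXEC sp_configure 'max server memory', 21504
RECONFIGURE
GO
```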