SQL Server Performance Forum – Threads Archive
128GB of RAM in SQL 2000?

Hi folks, we are considering purchasing a system with 128GB of RAM. I know that SQL 2000 Enterprise is documented to support 64GB when matched with Windows 2000 Datacenter, and all the documented limits for SQL 2000 Enterprise base the memory limit on the OS it's being used with. Does anybody know if SQL 2000 Enterprise is still limited to 64GB when paired with an OS that can support more than 64GB (such as Windows 2003 Enterprise x64)? Thanks, Steve
Folks, never mind, I found the answers to this question. They are:
1) To use SQL 2000 on an x64 version of Windows, you need SP4.
2) There is a bug in SP4 that limits you to 50% of your memory if AWE is enabled.
3) Regardless, the maximum supported by SQL 2000 (non-Itanium version) is 64GB, even if the OS supports more. See http://www.microsoft.com/sql/downloads/2000/sp4.mspx
Not directly related, but this might also be interesting:
SQL Server MVP
I really do wish that Microsoft would come out with a true 64-bit build for x64… many months till SQL 2005
Sometimes I think you guys just want the biggest, baddest toys to play with. Regardless of the max memory any given system will support, always count the number of DIMM sockets, and check the price per DIMM.
The following is from Crucial for DDR-2100, ECC Registered memory:
1GB: $190
If you want to buy a big dog system, go ahead, but I would still recommend populating with the more reasonable price/GB DIMM; in the above case, probably the 2GB, which carries a slight price-per-GB penalty over the 1GB, not enough to matter, while the 4GB carries a large penalty.
Over the next year or two, the 4GB will come down much more than the 2GB, hence if you can wait a year for the max capacity, it is cheaper to buy the 2GB now, throw it away next year, and buy the 4GB (that is, accept 1GB DIMMs this year and 2GB next year).

Another thing to watch out for in the full 64-bit SQL Server: be sure to use either stored procs or parameterized queries, otherwise the procedure cache could blow up with SQL statements that will never be reused.
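The procedure-cache advice can be illustrated in miniature. This is a sketch using Python's stdlib sqlite3 driver as a stand-in for a SQL Server client library; the principle (one reusable statement text with placeholders, instead of a distinct literal-laden statement per value) is the same, even though the caching details differ per server.

```python
import sqlite3

# Hypothetical illustration: sqlite3 stands in for a SQL Server driver.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (id INTEGER, src TEXT)")

# Bad: literals baked into the SQL text. Every distinct value produces a
# distinct statement that the server must compile (and cache) separately.
for src in ("10.0.0.1", "10.0.0.2", "10.0.0.3"):
    conn.execute(f"SELECT COUNT(*) FROM logs WHERE src = '{src}'")

# Good: one parameterized statement text, reused for every value, so the
# cached plan for this single statement is reused instead of duplicated.
query = "SELECT COUNT(*) FROM logs WHERE src = ?"
for src in ("10.0.0.1", "10.0.0.2", "10.0.0.3"):
    count = conn.execute(query, (src,)).fetchone()[0]
print(count)  # table is empty, so each count is 0
```

With stored procs the same effect falls out automatically, since the statement text is fixed inside the procedure.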
"Sometimes I think you guys just want the biggest, baddest toys to play with." Indeed.
SQL-Server-Performance.com One of the symptoms of an approaching nervous breakdown is the belief that one’s work is terribly important
All postings are provided "AS IS" with no warranties for accuracy.
And you don't? :-)
i may want the big toys, but i don’t push memory for the reasons above
The application I support loads firewall and IDS log data in real time, and I am trying to size a machine for a new client. I am estimating that on average this customer will load about 4GB an hour; the 4GB represents data and index pages. The more memory I have, the better the chance that the process that analyzes the data will not have to go to disk, as the data is already in memory. Based on my average load estimate, with 32GB of memory I can store about 9 hours of data and index pages.

Now I have also estimated the worst case, which would be about 20GB an hour. At that rate, 32GB will only give me 1.6 hours. And yes, it is expensive… I am only going to start with the 32GB of memory, as I am sure I will run into another bottleneck first. If I don't, then I will go shell out the big bucks for the big memory. Bert
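The retention estimate above is simple division; a quick sketch of the math, using the poster's own load estimates (these are his figures, not measurements):

```python
# Cache-retention math from the post above (poster's estimates).
ram_gb = 32
avg_load_gb_per_hour = 4      # average: data + index pages per hour
worst_load_gb_per_hour = 20   # worst case per hour

avg_hours = ram_gb / avg_load_gb_per_hour      # 8.0 hours, close to "about 9"
worst_hours = ram_gb / worst_load_gb_per_hour  # 1.6 hours
print(avg_hours, worst_hours)
```

The exact average-case result is 8 hours; "about 9" presumably assumes the average load runs a little under 4GB/hour.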
Disk systems properly configured can drive a lot of IO; 1-2GB/sec is reasonable on the systems we are talking about.
So the real value of memory is to handle small random accesses, intermediate results, and misc stuff;
large blocks of data that are accessed sequentially can reside on disk.
I think you will find 32GB more than enough as long as the storage system is sized correctly (for sequential bandwidth and random IO).
BTW, do you routinely pull 2GB/sec to one server out of your SAN? The reason I ask is that in theory it takes eight 2Gb/sec FC ports to reach that number.

In practice I can only get 1Gb/sec out of my 2Gb/sec FC cards. Because of this, I've got 3 dual-channel cards (QLA2342), so 6 2Gb/sec FC ports, and I'm only seeing 750MB/sec peaks. For me to see 2GB/sec I'd have to run 16 FC ports.

Is there a common configuration mistake I'm making? Am I missing something?

I'm sure the first question is going to be whether I have it set for a 1Gb/sec data rate :-) I have them configured for auto data rate, but the actual link speed reported by QLogic's management tool is 2Gb/s.
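For reference, the port arithmetic behind the question. This sketch assumes 250MB/s per 2Gb/s link on paper (2000 megabits divided by 8) and takes the ~195MB/s practical per-port figure reported elsewhere in the thread:

```python
import math

# Ports needed to reach 2GB/s, on paper vs. in practice.
target_mb_s = 2000                 # 2GB/s expressed in MB/s
theoretical_per_port = 2000 / 8    # 250MB/s: raw 2Gb/s link rate in bytes
practical_per_port = 195           # ~usable MB/s per 2Gb/s FC port (measured)

paper_ports = math.ceil(target_mb_s / theoretical_per_port)    # 8
practical_ports = math.ceil(target_mb_s / practical_per_port)  # 11
print(paper_ports, practical_ports)
```

So eight ports is the on-paper number, and around eleven once encoding and protocol overhead are accounted for, which matches the "more than 11 FC connections" question later in the thread.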
In working with the EMC Clariion line, keep in mind there is a 2Gbit/sec link from the server to the SAN (which is really another computer), then another 2Gbit/sec link to the disks.

When going from server to the SAN cache, I can get 195MB/sec per 2Gb/s FC (1 hop); when going from server to SAN to disk, 160-170MB/sec (2 hops). Supposedly the above is bi-directional, that is, I could have gotten that much in each direction simultaneously, but I have never tested this. On the newer Clariions, it takes about 16 disks to get this from SQL Server.

It's possible that a table scan might yield less, i.e., 130MB/sec per 2Gb/s FC, hence your 1Gb/s. The older 2P Xeon (with 400/533FSB) systems could drive 1.3GB/sec of disk IO, the Opterons can do about 2.2GB/sec, and I am trying to line up the new Xeon with 800MHz FSB on the Intel E7500/8500 chipset to see what IO it can drive.

The big Itanium systems can drive over 4GB/sec of IO, but it's hard to get those.
Thanks for the tips, that helps a bit. So, in order to pull 2.2GB/s out of these servers, do you have more than 11 FC connections to one server? What percentage of the time are you able to utilize that much bandwidth? I am wondering if I am hurting myself by being under-cabled.
It depends on your application. I like to find out both what the system can drive, and what each SQL Server operation can use:
1. The easiest place to use raw bandwidth is the database backup, so if you have a multi-TB transactional DB that must be backed up in a tight window, go ahead and configure for massive bandwidth.
2. The table scan can use reasonable bandwidth; I have seen between 300-400MB/sec, but am looking into how this can be pushed up.
3. Bulk loads use the least; the most I have seen is in the 130-200MB/sec range.
So for your app, I would test your most important disk-bound queries, then figure out what disk IO would drive CPU to 80%+, then configure to that point.
There are probably also good strategies for you to minimize disk IO.
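The figures above can be turned into a rough capacity check: how many ports' worth of bandwidth each operation type can actually keep busy. The ~195MB/s-per-port number is an assumption taken from earlier in the thread; the per-operation numbers are the upper ends of the quoted ranges.

```python
# Rough "ports' worth of bandwidth" each operation can keep busy.
per_port_mb_s = 195          # assumed usable MB/s per 2Gb/s FC port
observed_mb_s = {
    "table scan": 400,       # upper end of the 300-400MB/sec range
    "bulk load": 200,        # upper end of the 130-200MB/sec range
}
ports_busy = {op: round(mb / per_port_mb_s, 1) for op, mb in observed_mb_s.items()}
print(ports_busy)
```

The point of the exercise is the same as the post's advice: there is little value in cabling for backup-class bandwidth if the workload is dominated by bulk loads that can only drive one port's worth.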
Hey Joe, I am getting a new Dell 6850 for testing next week. If you have any tests you want me to run, I will be more than happy. For now I just have the internal disks to test, but in July we are setting it up with a CX700 and can run some tests if you would like. Bert
good post on the subject http://blogs.msdn.com/slavao/archive/2005/06/03/424825.aspx
Bert, we are also upgrading one of our SANs from a CX400 to a 700. I will also be getting some extra FC ports, cards, and disks at that time. We can compare numbers and run some tests. Should be great fun. :-)
MeanOldDBA
[email protected]
When life gives you a lemon, fire the DBA.
Oh, I also just bought 32GB for one of my DL 580 servers. Of course, it has 14 instances of SQL Server on it, so it's not really THAT great of a setup after all. lol
MeanOldDBA