SQL Server Performance

128GB of RAM in SQL 2000?

Discussion in 'Performance Tuning for Hardware Configurations' started by Slart, Jun 2, 2005.

  1. Slart New Member

    Hi folks,

    We are considering purchasing a system with 128GB of RAM.

    I know that technically SQL 2000 Enterprise is documented to support 64GB when matched with Windows 2000 Datacenter.

    All the documented limits for SQL 2000 Enterprise show the memory limit being based on the OS it's being used with.

    Does anybody know if SQL 2000 Enterprise is limited to 64GB when paired with an OS that can support more than 64GB (such as Windows 2003 Enterprise x64)?
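
    (For context: on the 32-bit build, memory above 4GB is only reachable once AWE is switched on, and with AWE the server no longer manages memory dynamically, so a cap gets set by hand. A minimal sketch; the 61440MB cap is just illustrative:)

        -- Enable AWE so SQL 2000 Enterprise (32-bit) can use memory above 4GB;
        -- the 'awe enabled' change takes effect after a service restart
        EXEC sp_configure 'show advanced options', 1
        RECONFIGURE
        EXEC sp_configure 'awe enabled', 1
        RECONFIGURE
        -- With AWE on, memory is not released dynamically, so cap it below
        -- physical RAM (value in MB; 61440 = 60GB, purely illustrative)
        EXEC sp_configure 'max server memory', 61440
        RECONFIGURE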

    Thanks,

    Steve

  2. Slart New Member

  3. FrankKalis Moderator

  4. bertcord New Member

    I really do wish that Microsoft would come out with a true 64-bit build for x64... many months till SQL 2005.
  5. joechang New Member

    Sometimes I think you guys just want the biggest, baddest toys to play with.

    Regardless of the max memory any given system will support, always count the number of DIMM sockets and check the price per DIMM.
    The following is from Crucial for DDR-2100, ECC Registered memory:
    4GB $3196
    2GB $451
    1GB $190
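
    To make the penalty concrete, that works out per GB to:
        1GB: $190 / 1GB  = $190/GB
        2GB: $451 / 2GB  = ~$226/GB (about a 19% premium over the 1GB)
        4GB: $3196 / 4GB = $799/GB (over 4x the 1GB rate)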

    If you want to buy a big dog system, go ahead, but I would still recommend populating it with the more reasonable price/GB DIMM; in the above case, probably the 2GB, which carries a slight premium over the 1GB, but not enough to matter, while the 4GB carries a large one.
    Over the next year or two, the 4GB will come down in price much more than the 2GB, hence if you can wait a year for the max capacity, it is cheaper to buy the 2GB now, throw it away next year (I will accept 1GB DIMMs this year, and 2GB next year), and buy the 4GB.

    Another thing to watch out for in the full 64-bit SQL Server: be sure to use either stored procs or parameterized queries; otherwise the procedure cache could blow up with SQL statements that will never be reused.
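
    A minimal T-SQL sketch of the difference (the Orders table and @id parameter are just placeholders):

        -- Ad-hoc string: every distinct literal compiles and caches its own
        -- plan, which is what bloats the procedure cache
        EXEC ('SELECT OrderDate FROM Orders WHERE OrderID = 12345')

        -- Parameterized: one plan is compiled and reused for every value
        EXEC sp_executesql
            N'SELECT OrderDate FROM Orders WHERE OrderID = @id',
            N'@id int',
            @id = 12345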

  6. Luis Martin Moderator

    "Sometimes I think you guys just want the biggest, baddest toys to play with"

    Indeed.


    Luis Martin
    Moderator
    SQL-Server-Performance.com

    One of the symptoms of an approaching nervous breakdown is the belief that one's work is terribly important
    Bertrand Russell


    All postings are provided “AS IS” with no warranties for accuracy.



  7. Slart New Member

    And you don't? :)
  8. joechang New Member

    I may want the big toys, but I don't push memory, for the reasons above.
  9. bertcord New Member

    The application I support loads firewall and IDS log data in real-time. I am trying to size a machine for a new client.

    I am estimating that on average this customer will load about 4GB an hour. The 4GB represents data and index pages. The more memory I have, the better the chance that the process that analyzes the data will not have to go to disk, as the data is already in memory.

    Based on my average load estimate with 32GB of memory I can store about 9 hours of data and index pages.

    Now I have also estimated the worst case, which would be about 20GB an hour. At that rate 32GB will only give me 1.6 hours. And yes, it is expensive... I am only going to start with the 32GB of memory, as I am sure I will run into another bottleneck first. If I don't, then I will shell out the big bucks for the big memory.
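
    One way I can watch whether the load actually stays memory-resident on SQL 2000 is the buffer manager's Page life expectancy counter; a minimal sketch using the standard sysperfinfo view:

        -- Seconds an average page survives in the buffer pool; a falling
        -- value means the incoming load is pushing pages back out to disk
        SELECT cntr_value AS page_life_expectancy_sec
        FROM master.dbo.sysperfinfo
        WHERE object_name LIKE '%Buffer Manager%'
          AND counter_name = 'Page life expectancy'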

    Bert


  10. joechang New Member

    Disk systems properly configured can drive a lot of IO; 1-2GB/sec is reasonable on the systems we are talking about.
    So the real value of memory is to handle small random accesses, intermediate results, and misc stuff;
    large blocks of data that are accessed sequentially can reside on disk.
    I think you will find 32GB more than enough as long as the storage system is sized correctly (for sequential bandwidth & random IO).
  11. Slart New Member

    BTW, do you routinely pull 2GB/sec to one server out of your SAN? The reason I ask is that, in theory, it takes eight 2Gb/sec FC ports to reach that number.

    In practice I can only get 1Gb/sec out of my 2Gb/sec FC cards. Because of this, I've got 3 dual-channel cards (QLA2342), so 6 2Gb/sec FC ports, and I'm only seeing 750MB/sec peaks. For me to see 2GB/sec I'd have to run 16 FC ports.

    Is there a common configuration mistake I'm making? Am I missing something?

    I'm sure the first question is going to be whether I have them set for a 1Gb/sec data rate :) I have them configured for auto data rate, but the actual link speed reported by QLogic's management tool is 2Gbps.
  12. joechang New Member

    In working with the EMC Clariion line,
    keep in mind there is a 2Gbit/sec link from the server to the SAN (which is really another computer), then another 2Gbit/sec link to the disks.

    When going from server to the SAN cache, I can get 195MB/sec per 2Gb/s FC (1 hop);
    when going from server to SAN to disk, 160-170MB/sec (2 hops).

    Supposedly the above is bi-directional, that is, I could have gotten that much in each direction simultaneously, but I have never tested this.

    On the newer Clariions, it takes about 16 disks to get this from SQL Server;
    it's possible that a table scan might yield less, i.e., 130MB/sec per 2Gb/s FC, hence your 1Gb/s.
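
    (As a rough sanity check on those per-link numbers: 2Gb/s FC signals at 2.125Gbaud with 8b/10b encoding, so the usable payload is about 2.125 x 0.8 / 8 = ~212MB/sec before framing overhead. That is why ~195MB/sec is close to the practical ceiling per port, and why a true 2GB/sec takes on the order of 10-12 ports rather than 8.)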

    The older 2P Xeon (with 400/533FSB) systems could drive 1.3GB/sec of disk IO; the Opterons can do about 2.2GB/sec. I am trying to line up the new Xeon with 800MHz FSB on the Intel E7500/8500 chipset to see what IO it can drive.
    The big Itanium systems can drive over 4GB/sec of IO, but it's hard to get those.
  13. Slart New Member

    Thanks for the tips, that helps a bit. So, in order to pull 2.2GB/s out of these servers you have more than 11 FC connections to one server?

    What percentage of the time are you able to utilize that much bandwidth? I am wondering if I am hurting myself by being under-cabled.
  14. joechang New Member

    It depends on your application; I like to find out both what the system can drive and what each SQL Server operation can use.

    1. The easiest place to use raw bandwidth is the database backup, so if you have a multi-TB transactional DB that must be backed up in a tight window, go ahead and configure for massive bandwidth (see the sketch after this list).

    2. The table scan can use reasonable bandwidth; I have seen between 300-400MB/sec, but am looking into how this can be pushed up.

    3. Bulk loads use the least; the most I have seen is in the 130-200MB/sec range.
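
    As promised above, a minimal sketch of a backup configured for bandwidth: striping the backup across several files lets each one land on a separate spindle or LUN and drives more aggregate throughput. The database name and paths are just placeholders:

        -- Stripe the backup across four files on four separate volumes so
        -- SQL Server can write to all of them in parallel
        BACKUP DATABASE BigDB
        TO  DISK = 'G:\backup\bigdb_1.bak',
            DISK = 'H:\backup\bigdb_2.bak',
            DISK = 'I:\backup\bigdb_3.bak',
            DISK = 'J:\backup\bigdb_4.bak'
        WITH INIT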

    So for your app, I would test your most important disk-bound queries, then figure out what disk IO would drive CPU to 80%+, and configure to that point.
    There are probably also good strategies for you to minimize disk IO.
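
    A hedged starting point for that measurement (the table and database names are placeholders): SET STATISTICS IO reports per-table reads for a statement, and fn_virtualfilestats shows cumulative IO per database file, so you can diff it before and after a test run.

        -- Per-statement view: logical vs physical reads for the query under test
        SET STATISTICS IO ON
        SELECT COUNT(*) FROM BigTable
        SET STATISTICS IO OFF

        -- Per-file view (SQL 2000 syntax): cumulative reads/writes and IO stalls
        -- for file id 1 (the primary data file) of the database under test
        DECLARE @db int
        SET @db = DB_ID('YourDB')
        SELECT NumberReads, NumberWrites, BytesRead, BytesWritten, IoStallMS
        FROM ::fn_virtualfilestats(@db, 1)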
  15. bertcord New Member

    Hey Joe, I am getting a new Dell 6850 for testing next week. If you have any tests you want me to run, I will be more than happy to. For now I just have the internal disks to test, but in July we are setting it up with a CX700 and can run some tests if you would like.

    Bert
  16. bertcord New Member

  17. derrickleggett New Member

    Bert, we are also upgrading one of our SANs from a CX400 to a 700. I will also be getting some extra FC ports, cards, and disks at that time. We can compare numbers and run some tests. Should be great fun. :)

    MeanOldDBA
    derrickleggett@hotmail.com

    When life gives you a lemon, fire the DBA.
  18. derrickleggett New Member

    Oh, I also just bought 32GB for one of my DL 580 servers. Of course, it has 14 instances of SQL Server on it, so it's not really THAT great of a setup after all. lol

    MeanOldDBA
    derrickleggett@hotmail.com

    When life gives you a lemon, fire the DBA.
