SQL Server Performance

SQL Core i7 Poor Performance

Discussion in 'ALL SQL SERVER QUESTIONS' started by tomandlis, Jan 26, 2012.

  1. tomandlis New Member

    What can explain this poor performance? I have a SQL script that I run regularly. The script takes 8 hours to run on my brand-new Dell Core i7, but runs in just 2.5 hours on an old Core2 Duo. Here are the system specs:

    New System
    OS: Win7 Home Premium 64 Bit
    CPU: 3.4 GHz Core i7-2600
    RAM: 8 GB

    Old System:
    OS: Win7 Ultimate 32 Bit
    CPU: 2.8 GHz Core2 Duo
    RAM: 4 GB (3 GB Usable)
  2. tomandlis New Member

    I don't think the poor performance has anything to do with stats as the script creates a DB from scratch. No stats would exist.
  3. tomandlis New Member

    This is crazy, even my laptop can outperform the Core i7. I've got a cheap Dell Vostro 1520 laptop that can run the script in 2.5 hours! It's running Win 7 32 bit, 2.26 GHz, 4 GB RAM.
  4. tomandlis New Member

    It's not just limited to this one script. I've got another job that I run regularly that takes 6 hours on the Core i7 and 1.5 hours on the Core2 Duo (above).
  5. davidfarr Member

    Which version of SQL Server are you running? If it's a 32-bit version running on your 64-bit Home Premium, then you may find the article below of interest:
    http://www.sqlserver-training.com/can-i-run-sql-server-2005-32-bit-on-a-windows-server-64-bit/-
    The relevant points in that article mention that the Microsoft emulation layer (WoW64) can result in some 32-bit SQL Server client tools performing slower, and the increase to 8GB RAM in your new system is of no benefit to a 32-bit SQL Server.

    Some other considerations:
    CPU is a fairly small part of the SQL Server performance puzzle. I would expect a script that creates databases to be disk I/O-intensive.
    What kind of hard disk speeds, comparatively, do these machines have? Are they RAIDed in any way? Do any of the systems have SSDs?
    Are you running anti-virus on your new system? (I ask not because I think you have a virus, but because some very paranoid anti-virus programs perform on-access scanning, essentially double-checking everything that is read from or written to disk.) Was any other application running on the new system while the script was executing? Has the SQL Server process priority been boosted in the configuration options? Are you using dynamic memory allocation and default page file sizes?
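    If it helps, the edition/bitness and the configuration settings I asked about can be read straight off the instance; a minimal sketch (assuming sysadmin rights):

    -- Edition and bitness of the SQL Server instance
    SELECT SERVERPROPERTY('Edition') AS Edition,
           SERVERPROPERTY('ProductVersion') AS ProductVersion,
           @@VERSION AS FullVersionString;

    -- Expose the advanced options so all configuration values are visible
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;

    -- Current values for the settings mentioned above
    EXEC sp_configure 'priority boost';
    EXEC sp_configure 'max server memory (MB)';
    EXEC sp_configure 'min server memory (MB)';
    EXEC sp_configure 'max degree of parallelism';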
  6. tomandlis New Member

    The SQL versions match the system OS. It is SQL 2008 64-bit Enterprise Edition SP1 on the 64-bit Core i7. All the systems mentioned above are running standard off-the-shelf Dell setups (XPS 8300 = Core i7, Vostro 1520 = Core2 Duo). The Core i7 has a nice SATA II hard drive (7200 RPM) and the Core2 Duo has a cheap Serial ATA 7200 RPM drive. All are running the free version of the AVG virus scanner. No apps were running on the Core i7 system, but the Core2 Duo machines were running lots of stuff like Outlook, Firefox, WinAmp, Excel, Word, etc., as I do things on them while the script runs. None of the systems has boosted the SQL Server process priority. The script allocates a lot of space (10 GB) for the DB. The script doesn't modify the default page file size, but whatever it is on the poor-performing Core i7 is likely the same on the high-performing Core2 Duo.
  7. tomandlis New Member

    Bueller? Anyone?
  8. davidfarr Member

    Speaking for myself;
    The nature of a technical forum puts an inherent practical limit on how much "hands-on" assistance members are able to give.
    Every hardware upgrade on every server that I have worked with has always resulted in a performance improvement. Therefore, assuming that your hardware is not faulty, I am fairly sure that your problem will eventually be traced to some kind of OS setting, thread priority or parallel execution issue, script syntax optimization problem, or SQL Server installation and configuration settings.
    If it were my machine and my script, with both in front of me, I would first read through each command within the script to check for syntax that could potentially affect performance. I would then set various performance counters in place (RAM, paging, CPU, I/O...) and run the script to isolate and identify the longest-running commands with the highest resource demands and/or blocking issues. It is very likely that a "common denominator" will emerge eventually. Of course, this would take some expertise to interpret, and mostly it would take time, regrettably more time than I can give to a single forum thread.
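    As a rough sketch of the kind of analysis I mean, the plan cache DMVs (SQL 2005 and later) can point at the longest-running statements without setting up external counters; run something like this after the script completes:

    -- Top statements by total elapsed time since their plans were cached
    SELECT TOP (20)
           qs.total_elapsed_time / 1000 AS total_elapsed_ms,
           qs.total_worker_time / 1000  AS total_cpu_ms,
           qs.total_logical_reads,
           qs.execution_count,
           SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
               ((CASE qs.statement_end_offset
                     WHEN -1 THEN DATALENGTH(st.text)
                     ELSE qs.statement_end_offset
                 END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_elapsed_time DESC;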
  9. tomandlis New Member

    Thanks. I've timed the operations within the proc to the second, and across the board all operations take 3-6x as long on the new system. I agree with you and I'm going to start messing with system settings, starting with turning off virtualization. If that doesn't work I might have to set up performance counters, but isn't there an easier way? I remember using a SQL system query that listed bottleneck counts like CXLATCH or something like that...
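    I think the query I was half-remembering is something along these lines (sketching from memory, so I may have the wait type names wrong):

    -- Top wait types accumulated since the last restart (or since the stats were cleared)
    SELECT TOP (20)
           wait_type,
           waiting_tasks_count,
           wait_time_ms,
           signal_wait_time_ms
    FROM sys.dm_os_wait_stats
    ORDER BY wait_time_ms DESC;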
  10. tomandlis New Member

    Solved it, I think: I went into the BIOS and disabled hyperthreading, and the script now runs much faster on the Core i7.
  11. davidfarr Member

    Thanks for sharing your solution.
    It would seem inconvenient, though, to disable hyperthreading on the whole machine for the sake of SQL Server when you may have other applications that benefit from it.
    As an alternative to a BIOS change, you can try setting SQL Server to single-threaded query execution:

    EXEC sp_configure 'max degree of parallelism', 1;  -- requires 'show advanced options' to be enabled
    RECONFIGURE;
    This may be equally effective in improving performance for SQL Server on your system, and would let you re-enable HT in the BIOS.
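    If you only want to test the effect on a single statement rather than the whole instance, a MAXDOP query hint does the same thing per query; the table name below is just a placeholder:

    -- Placeholder example: force a single-threaded plan for one statement only
    SELECT COUNT(*) FROM MyBigTable OPTION (MAXDOP 1);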
  12. tomandlis New Member

    Turns out hyperthreading is not the problem. The performance gain I saw when turning HT off was due to the test script short-circuiting itself. I think I've stumbled upon the real reason: I/O waits. I ran the script below and the end result for my MDF file was 164! It should be less than 20 according to the guru I got this script from.

    --check IO

    --get the dbid and fileid from the query below and then use them in the
    --function below that
    SELECT d.name AS Database_Name,
           d.dbid AS Database_ID,
           f.fileid AS File_ID,
           f.filename
    FROM sysaltfiles f
    INNER JOIN sysdatabases d
        ON f.dbid = d.dbid
    ORDER BY d.name, f.fileid

    --SELECT * FROM :: fn_virtualfilestats(dbid, fileid)
    SELECT * FROM :: fn_virtualfilestats(16, 1)

    ---MAJOR PROBLEM!!!!
    --if IsStall > 20 then there is an IO problem
    SELECT IoStallMS / (NumberReads + NumberWrites) AS IsStall
    FROM :: fn_virtualfilestats(16, 1) --1 = mdf usually
    -- RESULT: 164!!!
  13. tomandlis New Member

    What could be causing this? There are other DBs on the same drive, but they are inactive. That IsStall number (164) seems really high to me, way above the acceptable range. Might it just be that the problem DB is sharing the drive with the OS?

    This machine is dedicated to SQL Server. It has only one drive: a nice brand-new SATA II hard drive (7200 RPM). The only other contention I can think of on that drive is the LDF for the same DB. Running the analysis from above on the LDF produces an IsStall of 18. That is less than the benchmark 'trouble' level of 20, but not by much.
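    To see whether the "inactive" databases and the LDF are actually generating I/O on that drive, I believe the same stall calculation can be run for every file at once using the DMV version of that function (SQL 2005 and later); a sketch:

    -- Average stall per I/O for every database file on the instance
    SELECT DB_NAME(vfs.database_id) AS database_name,
           mf.name AS logical_file_name,
           mf.physical_name,
           vfs.num_of_reads,
           vfs.num_of_writes,
           (vfs.io_stall_read_ms + vfs.io_stall_write_ms)
               / NULLIF(vfs.num_of_reads + vfs.num_of_writes, 0) AS avg_stall_ms
    FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
    INNER JOIN sys.master_files AS mf
        ON mf.database_id = vfs.database_id
       AND mf.file_id = vfs.file_id
    ORDER BY avg_stall_ms DESC;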
  14. tomandlis New Member

    Update: I purchased a couple of solid-state drives and moved my tempdb files to them. This cut execution time in half, as the script uses tempdb extensively. I also found that templog is a performance bottleneck. SSDs are an expensive solution, though, and it doesn't seem to be the core issue: performance, though better, still isn't as good as it was on my old junky system.
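    For anyone wanting to do the same, moving tempdb is just a metadata change that takes effect after the SQL Server service restarts; a sketch (the SSD path is a placeholder, and the logical names assume the default tempdev/templog):

    -- Point the tempdb data and log files at the SSD; takes effect on the next service restart
    ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'S:\SQLTempDB\tempdb.mdf');
    ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'S:\SQLTempDB\templog.ldf');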
  15. tomandlis New Member

    Update: the problem still exists. Despite putting tempdb on a SATA SSD, its read/write performance is terrible. My cheap laptop's IDE drive is outperforming my production SSD. When I run the statement below I get an IsStall that is much higher on the production system (54) than on my laptop test system (16).

    SELECT IoStallMS / (NumberReads+NumberWrites) as IsStall
    FROM :: fn_virtualfilestats(2, 1) --where 2 is dbid for tempdb and 1 is the mdf file

    Hundreds of millions of records are being written to tempdb, so its read/write performance is crucial. Any guesses why its read/write performance would be so bad?
  16. Luis Martin Moderator

    How many tempdb files do you have? One suggestion is to have one per processor.
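    Something along these lines, with the path, size and file name as placeholders, and keeping all tempdb data files the same size:

    -- Add one extra, equally sized tempdb data file (repeat per processor core)
    ALTER DATABASE tempdb ADD FILE
        (NAME = tempdev2, FILENAME = 'S:\SQLTempDB\tempdb2.ndf', SIZE = 1024MB, FILEGROWTH = 256MB);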
