SQL Server Performance

thousands of simultaneous users -- what do you do?

Discussion in 'SQL Server 2008 General Developer Questions' started by oscarius, Feb 13, 2011.

  1. oscarius New Member

    Folks, I am not a DB guy at all, but I am a developer; someone asked me this question and I got interested in the answer.
    Let's say you have 50K users hitting the same stored procedure at the same time. The stored procedure is computation-intensive and goes out to lunch for several seconds for each caller.
    Am I correct in thinking that SQL Server has a fixed number of worker threads on which these calls run -- so if the pool holds 100 threads, the first 100 callers enter the procedure and the rest wait in a queue?
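    That queuing behavior can be sketched outside SQL Server with a bounded thread pool (Python here, purely as an illustration; `POOL_SIZE` and the sleep are arbitrary stand-ins, not SQL Server internals):

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

POOL_SIZE = 4        # stand-in for the server's worker-thread pool
CALLERS = 20         # simultaneous callers, far more than the pool size

active = 0           # how many callers are inside the "sproc" right now
peak = 0             # highest concurrency ever observed
lock = threading.Lock()

def expensive_sproc(caller_id):
    """Stand-in for a computation-intensive stored procedure."""
    global active, peak
    with lock:
        active += 1
        peak = max(peak, active)
    time.sleep(0.05)          # "out to lunch"
    with lock:
        active -= 1
    return caller_id

with ThreadPoolExecutor(max_workers=POOL_SIZE) as pool:
    results = list(pool.map(expensive_sproc, range(CALLERS)))

print(f"peak concurrency: {peak}")      # never exceeds POOL_SIZE
print(f"completed: {len(results)}")     # everyone finishes; excess callers just waited
```

    All 20 callers complete, but at most 4 run at once -- the rest sit in the executor's queue, which is the behavior the question describes.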
    For the sake of argument, let's say there are no record locks inside that sproc -- it's all complicated joins and calculations. What can be done in this case? As a developer, I started thinking that perhaps I should have multiple copies of the database so that I can partition the incoming callers in some round-robin fashion. But even if I have 100 databases that are all copies of each other, the number of callers is still far larger than the number of threads available to do the work.
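    That round-robin partitioning idea can be sketched like this (the replica names are made up; a real setup would also have to keep the copies in sync):

```python
import itertools

# Hypothetical connection targets for identical database copies (made-up names).
replicas = ["db-copy-01", "db-copy-02", "db-copy-03"]
next_replica = itertools.cycle(replicas)

def route_caller(caller_id):
    """Pick the next replica in round-robin order for this caller."""
    return next(next_replica)

assignments = [route_caller(i) for i in range(7)]
print(assignments)
# callers 0..2 go to copies 1..3, then the cycle wraps around
```

    Each copy then only sees 1/N of the callers -- though, as the question notes, the total work still has to be done somewhere.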
    From all the reading I have done, the suggestions seem to be of a "tuning" nature -- I can improve this or that -- but what I need is something "architectural" that shows me how to truly scale databases. For example, calling .NET DLLs from the sproc might improve performance dramatically -- but what if the number of users goes up to 100K? 500K? Any white papers or books you can suggest? I am truly looking for the way the "big boys" do it -- I need to know that as the number of users goes up and up, I can spend the money, buy new servers, do some DB magic, and have a truly scalable solution. Thank you.
  2. Flexdog New Member

    "Out to lunch" and computation-intensive imply IO- and CPU-bound, along with the assumption that the sproc is already optimized.
    Scale the DB up:
    * more memory for processing & pinned data
    * RAID 10 SLC drives
    * SAN with a dedicated processor per object
    Scale the DB out:
    * distributed stand-alone servers
    * Bigtable (Google)
    DB intermediate sets:
    * refactor and cache intermediate results (the non-dynamic parts)
    * hash/remap long keys into an int lookup
    App/logic layer:
    * refactor the sproc to move CPU and memory load off the DB server
    * load-balance servers (at the router level if needed)
    Cynical part -- 500K; the sproc is optimized and set-based code ;)
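    The "cache intermediate results" and "remap long key into int lookup" points above can be sketched together in Python (the key string and the workload are invented examples):

```python
from functools import lru_cache

# Remap long string keys to small integer ids: comparisons and cache
# lookups on ints are cheaper than on long composite keys.
key_to_id = {}

def intern_key(long_key: str) -> int:
    """Assign each distinct long key a small, stable integer id."""
    if long_key not in key_to_id:
        key_to_id[long_key] = len(key_to_id)
    return key_to_id[long_key]

@lru_cache(maxsize=1024)
def intermediate_result(key_id: int) -> int:
    """Cache the non-dynamic part of an expensive computation."""
    return sum(i * i for i in range(10_000))   # stand-in for heavy work

# Two callers asking for the same thing: only the first pays the cost.
a = intermediate_result(intern_key("customer:EU:2011-02:revenue"))
b = intermediate_result(intern_key("customer:EU:2011-02:revenue"))
print(a == b, intermediate_result.cache_info().hits)
```

    The second call is served from the cache, which is the point: repeated callers stop re-paying for the non-dynamic part of the computation.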

  3. satya Moderator

    Welcome to the forums.
    SQL Server manages concurrent connections with a pool of worker threads, and how many it can sustain depends on server memory and CPU. As referred to above, the hardware configuration (for example, a minimum of 8 dual-core CPUs) plays a bigger role than the size of the queue. With thorough testing you can change the default max worker threads setting -- see http://msdn.microsoft.com/en-us/library/ms187024.aspx.
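    For reference, that setting is changed with `sp_configure` (0, the default, lets SQL Server size the thread pool automatically based on the hardware):

```sql
-- Expose and change the "max worker threads" option
-- (documented at the MSDN link above).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max worker threads', 0;  -- 0 = let SQL Server decide
RECONFIGURE;
```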
    Also, when the procedure is called from .NET, connection pooling can help to some extent -- see http://msdn.microsoft.com/en-us/library/8xx3tyca(v=vs.80).aspx for reference, and http://www.qdpma.com/SystemArchitecture/HighCallVolumeSQLonNUMA.html for information on NUMA systems.
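    The idea behind connection pooling is simply to reuse a fixed set of connections instead of opening one per caller. A minimal sketch (Python, with a fake connection factory standing in for a real driver -- this is not the ADO.NET implementation):

```python
import itertools
import queue

class ConnectionPool:
    """Minimal connection-pool sketch: hand out and reclaim a fixed
    set of connections; callers block when all are in use."""
    def __init__(self, size, connect):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())

    def acquire(self):
        return self._pool.get()      # blocks until a connection is free

    def release(self, conn):
        self._pool.put(conn)

# Fake "connection" factory for the sketch (a real one would open a socket).
counter = itertools.count()
pool = ConnectionPool(2, lambda: f"conn-{next(counter)}")

c1 = pool.acquire()
c2 = pool.acquire()
pool.release(c1)
c3 = pool.acquire()          # reuses c1 rather than opening a new connection
print(c3 == c1)
```

    Reuse is what saves the per-call setup cost; the pool size, like SQL Server's worker threads, caps how many callers proceed at once.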
    SCALABILITY is the buzzword for you to concentrate on with SQL Server at such numbers. We have a 4-node cluster that handles around 40,000 connections at any time without issues, and CPU, memory and disk stay manageable with regular monitoring and alerting in place. On that topic see http://www.sql-server-performance.com/articles/clustering/massive_scalability_p1.aspx; there is also a trade-off between a VIRTUAL and a PHYSICAL SQL Server instance when you are handling such a sheer volume of connections.
    SQL Server 2008 R2 can offer that kind of scalability through its internal features; see the Performance and Scalability white paper.
    Hope this helps.
