Hi,

I have a SQL query with a couple of nested subqueries (query attached below for reference). The very first time I execute it on a SQL Server instance it takes around 50-60 seconds, but every subsequent execution takes barely 1 second, and I am stuck, because I need it to take its original time so that I can work on reducing that original execution time. I want to try various tuning and query modification approaches.

I have tried clearing all caches using the commands listed underneath, but they do not help in getting the original execution time back. We are using SQL Server 2012 Enterprise Edition.

USE <Database_Name>
GO
CHECKPOINT;
GO
DBCC FLUSHPROCINDB(<Database_ID>)
DBCC FREEPROCCACHE
DBCC DROPCLEANBUFFERS
DBCC FREESYSTEMCACHE ('ALL')
DBCC FREESESSIONCACHE
GO

Normally anyone would be happy that the query performs great after the first execution, but my aim is different: I want it to take its original time on every subsequent execution.

Please help me resolve the issue described above; any pointer would be great. Let me know if you need more details.

Thanks,
Mrinal
Attaching a file with the SQL query for reference, as otherwise the post was exceeding the character limit.
I have tried restarting the SQL Server service, but that doesn't help. Do you think I need to reboot the machine?
No. It is normal behaviour for a query to take more time after restarting the service. Once SQL Server has read the data into memory on the first execution, the same query should run faster. Based on your information, I would also suspect some blocking when you see 50 seconds. This is the first time I've tried to post something to help someone get less performance!
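If you want to confirm that the difference is the buffer cache, you can compare physical vs. logical reads between a cold and a warm run. A minimal sketch along these lines (the SELECT is just a placeholder for your own nested query, and clearing the buffer pool should only be done on a test server):

-- Report elapsed time and read counts for each statement
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

-- Cold run: write dirty pages to disk, then empty the buffer pool (test servers only)
CHECKPOINT;
DBCC DROPCLEANBUFFERS;

SELECT COUNT(*) FROM dbo.YourLargeTable;   -- placeholder for your query

-- Warm run: same statement again; physical reads should drop to near zero
SELECT COUNT(*) FROM dbo.YourLargeTable;   -- placeholder for your query

SET STATISTICS IO OFF;
SET STATISTICS TIME OFF;

On the cold run you should see high physical reads; on the warm run mostly logical reads, which is why the second execution is so much faster.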
Hi,

Issue resolved. Our database has auto create and auto update statistics turned on, so the first execution was slow because statistics were being created; from then on an optimal plan had been generated and the query ran very fast. I am planning to update statistics manually to avoid the slow first execution.

Thanks for the suggestions. I know it is strange that someone is asking how to slow a query down.

Thanks,
Mrinal
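For anyone finding this later, a minimal sketch of updating statistics manually ahead of time (the table and statistics names are just placeholders; FULLSCAN is optional but gives the most accurate histograms):

-- Refresh all statistics on one table with a full scan
UPDATE STATISTICS dbo.YourLargeTable WITH FULLSCAN;

-- Or refresh a single statistics object / index on that table
UPDATE STATISTICS dbo.YourLargeTable IX_YourLargeTable_SomeColumn WITH FULLSCAN;

-- Check when statistics on the table were last updated
SELECT name, STATS_DATE(object_id, stats_id) AS last_updated
FROM sys.stats
WHERE object_id = OBJECT_ID('dbo.YourLargeTable');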
Adding a little bit more to what Luis said: the consistency of query execution depends on many factors. One of them is buffer recycling, using either the DBCC commands you posted or a server restart, as Luis confirmed. It also depends on how well the query holds up under stress, i.e. how many concurrent users can run the same query in roughly the same execution time. That is not easy and needs real effort to optimize your T-SQL for the lowest IO and CPU cost, so it can sustain any number of concurrent users with consistent results, rather than 50 seconds sometimes and 1 second other times.

To simulate that without having to change the database's auto update statistics options, you can use a SQL query stress tool such as "SQLQueryStresTool", which you can download at http://www.sqlserver-performance-tuning.com/apps/webstore/

You can also read more about this topic of stress testing on my blogs:
http://www.sql-server-performance.com/2013/sql-server-index-tuning/
http://www.sql-server-performance.com/2012/sql-server-t-sql-tuning-temp-table-union/
http://www.sql-server-performance.com/2012/sql-tuning-tvf-scalar-functions/
http://www.sql-server-performance.com/2012/sql-server-t-sql-tuning-not-in-and-not-exists/

Let me know if any further help is needed.
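If you want a quick look at how much your queries already vary between executions on this instance, one place to check is the plan cache statistics. A minimal sketch (the DMV reports times in microseconds, so they are converted to milliseconds here):

-- Queries with the largest gap between their fastest and slowest execution
SELECT TOP (20)
       qs.execution_count,
       qs.min_elapsed_time / 1000 AS min_elapsed_ms,
       qs.max_elapsed_time / 1000 AS max_elapsed_ms,
       qs.total_elapsed_time / qs.execution_count / 1000 AS avg_elapsed_ms,
       SUBSTRING(st.text, 1, 200) AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.max_elapsed_time - qs.min_elapsed_time DESC;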
Two follow-up questions:
1. Currently I am updating statistics on the whole database using sp_updatestats. Does that achieve the same thing?
2. Do you recommend leaving auto update statistics on? I know it is not recommended for VLDBs.
Welcome to the forums! In my opinion, if you can update statistics in a maintenance plan, I don't recommend turning auto update on. Furthermore, if you can rebuild all your indexes, say, twice a week, then you don't need to update statistics separately, since rebuilding an index also refreshes that index's statistics. BTW: what SQL Server version do you have?
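A minimal sketch of what that maintenance could look like (the table name is a placeholder; note that an index rebuild refreshes the rebuilt index's statistics with a full scan, while sp_updatestats only touches statistics that have had row modifications, usually with a sampled scan):

-- Rebuild every index on one table (also refreshes those indexes' statistics)
ALTER INDEX ALL ON dbo.YourLargeTable REBUILD;

-- Refresh column statistics and anything a rebuild does not cover
UPDATE STATISTICS dbo.YourLargeTable WITH FULLSCAN;

-- Or, across the whole database, update only the statistics that need it
EXEC sp_updatestats;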