How to avoid blocks on a user table | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

How to avoid blocks on a user table

Hello, I have a user-defined central table called "user_table". It can be thought of as a temporary table that is always present. This table is used for nearly every report in our application. Every user who runs one of these reports is given a unique Handle-ID by the system. The data he or she requires is read from the actual tables in the DB and written to this table together with the unique Handle-ID. After that, some filter action is applied and the data is selected directly from this table.
This table is truncated every morning, and then the following statements are run:
DBCC CHECKTABLE (user_table)
DBCC DBREINDEX (user_table)
UPDATE STATISTICS user_table
EXEC sp_recompile user_table
But because the amount of data in the table varies after every insert and delete operation, the statistics that were updated in the morning are no longer useful.
I put an "UPDATE STATISTICS user_table" statement after the INSERT and tested it on our TEST DB. It seems to be OK at first sight, but I don't know whether this is a good solution, as this statement itself could also run slowly. Do you have any ideas? Thanks in advance. Senc.
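As an aside, the post-INSERT statistics refresh does not have to rescan the whole table. A hedged sketch of a cheaper variant (the sample percentage and index name are assumptions for illustration, not from the thread):

```sql
-- Refresh statistics on user_table right after the bulk insert, using a
-- sample instead of a full scan so the extra step stays fast.
UPDATE STATISTICS user_table WITH SAMPLE 20 PERCENT

-- Or, if the filter queries rely on one particular index, limit the refresh
-- to that index only (IX_user_table_handleid is a hypothetical name):
-- UPDATE STATISTICS user_table IX_user_table_handleid WITH FULLSCAN
```

Sampling trades a little estimate accuracy for a much shorter maintenance window after each load.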
Make sure the AUTO_UPDATE_STATISTICS database option is enabled:
ALTER DATABASE YourDB SET AUTO_UPDATE_STATISTICS ON
Also refer
http://www.sql-server-performance.com/sf_block_prevention.asp
Madhivanan
Failing to plan is Planning to fail
Thanks for your fast answer. This option is already set to TRUE.
How about including DBCC UPDATEUSAGE before every SELECT or after every INSERT? Thanks,
Ram "It is easy to write code for a spec and walk on water, provided both are frozen…"
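For reference, DBCC UPDATEUSAGE corrects the row and page counts stored in sysindexes, which can drift after heavy insert/delete churn and skew the optimizer's estimates. A minimal sketch (the database name is a placeholder):

```sql
-- Fix inaccurate row/page counts for user_table in the system catalog.
-- COUNT_ROWS forces an actual recount rather than an estimate.
DBCC UPDATEUSAGE ('YourDB', 'user_table') WITH COUNT_ROWS
```

Note that this fixes catalog bookkeeping, not statistics histograms, so it complements rather than replaces UPDATE STATISTICS.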
Most blocking problems happen because a single process holds locks for an extended period of time, creating a chain of blocked processes, each waiting on another for locks. Why are you giving users access to the temporary table while it is being truncated and the DBCC checks are running?
Why not run the DBCC maintenance on the base tables rather than the staging table to get optimum performance? Also make sure the queries are submitted in batches rather than locking the whole table or page against other queries. Maybe this is the best time to optimize the queries that are running here: take the help of PROFILER to find long-running queries and apply suitable indexes by checking the execution plan. The following links are for your reference on reducing blocking:
http://www.sql-server-performance.com/blocking.asp
http://msdn.microsoft.com/library/en-us/optimsql/odp_tun_1a_4jvy.asp?frame=true
Satya SKJ
Microsoft SQL Server MVP
Contributing Editor & Forums Moderator
http://www.SQL-Server-Performance.Com
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
Do you have a clustered index? You may want to consider an alternative solution: (1) schedule a query to run instead of real-time execution of all queries, or (2) create a view consisting of multiple tables (UNION) with identical structure, each assigned to a different department/group of users, and use CHECK constraints to enforce partitioning; and/or the other items suggested above. May the Almighty God bless us all!
www.empoweredinformation.com
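Suggestion (2) above is a local partitioned view. A hedged sketch of the idea, with all table, column, and range values invented for illustration (the real schema and Handle-ID ranges are not in the thread):

```sql
-- One member table per department/group, each constrained to its own
-- Handle-ID range via a CHECK constraint.
CREATE TABLE user_table_dept1 (
    handleid INT NOT NULL CHECK (handleid BETWEEN 1 AND 9999),
    found    BIT NOT NULL DEFAULT 0,
    column1  INT NULL
)
CREATE TABLE user_table_dept2 (
    handleid INT NOT NULL CHECK (handleid BETWEEN 10000 AND 19999),
    found    BIT NOT NULL DEFAULT 0,
    column1  INT NULL
)
GO
-- The view presents the members as a single logical table.
CREATE VIEW user_table_all AS
    SELECT handleid, found, column1 FROM user_table_dept1
    UNION ALL
    SELECT handleid, found, column1 FROM user_table_dept2
GO
```

Because of the CHECK constraints, the optimizer can restrict a query filtered on handleid to the one member table that can contain it, so one group's heavy inserts and deletes no longer block another group's reads.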
BTW, what is the rowcount on the base table, and how many rows are imported/truncated on a daily basis? Satya SKJ
There are a lot of base tables that have to be considered. And our users' main request is that they want to filter their data after the main set of desired data has been retrieved.
Every user is part of a motor project. They select their project and have their data shown in the grid; that is about 1,500-2,000 rows. Afterwards, they want to filter this data several times, which should be very fast. Because of that, we have this central user_table. First, I insert the desired data into this table with a special WHERE clause (i.e. selected project, selected subproject, …):
insert into user_table (column1, column2, column3, …)
select column1, column2, column3, …
from base_table1, base_table2, …
where project = @project
and subproject = @subproject
and …. = ….
and …. = …..
And if the user sets a filter, I apply it to the user_table and mark the affected rows as found:
update user_table
set found = 1
where handleid = @user_hdl_id
and exists (select 1 from base_table1 where … criteria1)
and exists (select 1 from base_table2 where … criteria2)
…
After that, I run a select against this table, joined with the required base tables and with paging ability (i.e. this query returns max. 100 rows):
select column1, column2, column3, …
from user_table, base_table1, base_table2, …
where handleid = @user_hdl_id
and found = 1
and ….
and …
As the user pages, this query runs and takes about 1-2 secs, and it is not the problem. But the insert and the update processes sometimes take too much time because of blocking. There is also a DELETE when the user changes one of his/her main criteria (like project or subproject).
The UPDATE and DELETE statements both have ROWLOCK hints. But sometimes that doesn't help, as I mentioned before. Note: after the user logs out, or if a session timeout occurs, the data with the corresponding Handle-ID is deleted from this user_table.
Therefore, there can be roughly 40,000 rows in this table at a time. So, that's my problem. I know this is not a good way of showing this data to the user, but as we have approximately 80 filter criteria and this filtering has to happen very fast, we had to do it this way. But now we are having performance problems.
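Since every statement in the workflow above filters on handleid, one commonly suggested remedy (an assumption here, not something confirmed in the thread) is to cluster the table on that column, so each user's working set is physically contiguous and the ROWLOCK hints rarely touch pages holding other users' rows:

```sql
-- Hypothetical sketch: cluster user_table on handleid (plus the found flag
-- used by the paging query) so inserts, the found-flag UPDATE, and the
-- logout DELETE for one Handle-ID stay within that user's own pages.
-- The index name is invented for illustration.
CREATE CLUSTERED INDEX IX_user_table_handle
    ON user_table (handleid, found)
```

With roughly 40,000 rows spread across concurrent users, range scans on a single handleid then read a handful of adjacent pages instead of row-by-row lookups scattered across the whole table.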
Can you use views for the filters through which you need to display the data?
If possible, avoid performing inserts during the data queries, as they tend to work against you on a table with a lot of indexes. Satya SKJ
That would be too much work, to change everything. We don't have time for that.