dbcc indexdefrag | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive


I ran DBCC INDEXDEFRAG on this table last Friday and then ran DBCC SHOWCONTIG, and all indexes showed around 99% scan density and less than 1% logical fragmentation.
This morning I checked again, and on 3 indexes (one is a single-column integer index, the other 2 are composite indexes) the scan density had dropped to 15% and logical fragmentation had risen to around 40%. Is this possible in just one weekend? The fill factor of the indexes has been left at the default of 0; is that the reason why?
It could be a factor. It’s hard to answer this question though without knowing how the tables are used. Is there a lot of usage or batch processes that happen over the weekend? Do your indexes get a lot of random updates and inserts? If the answers are yes, then it’s very possible. If this is true though, you should probably look at doing an indexdefrag nightly. MeanOldDBA
[email protected] When life gives you a lemon, fire the DBA.
Yes, there's a nightly data loading and transformation job, and the table does get a lot of random updates and deletes.
I will think about scheduling this more often.
You can run DBCC DBREINDEX instead of INDEXDEFRAG, especially if you can schedule it during non-working hours. Luis Martin
Moderator
SQL-Server-Performance.com
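For reference, a sketch of the two commands side by side; the database, table, and index names are placeholders, not anything from this thread:

DBCC INDEXDEFRAG ('MyDatabase', 'Orders', 'IX_Orders_CustomerID')
-- Online-friendly: compacts and reorders the leaf pages in place,
-- holding only short-duration locks, so it can run while users work.

DBCC DBREINDEX ('dbo.Orders', '', 90)
-- Full rebuild of all indexes on the table ('' = all indexes) with a
-- 90% fill factor. More thorough, but it locks the table while it runs,
-- so it is best scheduled in a maintenance window.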
I run this once a night on all my databases. I run it at 2:00 in the morning.

SET QUOTED_IDENTIFIER ON
GO
SET ANSI_NULLS ON
GO

ALTER PROCEDURE sp_system_admin_maintenance
--Name: sp_system_admin_maintenance
--Purpose: This stored procedure (SP) will be used for running nightly maintenance on
--all databases. It resides in the master database of each database server.
--1) DBCC DBREINDEX on all tables.
--2) UPDATE STATISTICS on all tables.
--3) Recompile all procedures.
--4) DBCC UPDATEUSAGE on the database.
--5) DBCC CHECKDB on the database.

--Format:  EXEC sp_system_admin_maintenance

--Example: EXEC sp_system_admin_maintenance

--Action:    Author:          Date:       Comments:
------------ ---------------- ----------- --------------------------------------------
--Created    Derrick Leggett  11/11/2003  Initial Development
--Modified   Derrick Leggett  02/17/2004  Added shrinkdatabase logic into maintenance.
--
AS

SET ARITHABORT ON

--Record start time.
PRINT 'START TIME: ' + CAST(GETDATE() AS VARCHAR(255))

--Declare needed variables.
DECLARE
    @int_error_counter INT,
    @int_max INT,
    @int_counter INT,
    @txt_sql NVARCHAR(4000),
    @txt_name NVARCHAR(4000),
    @txt_owner NVARCHAR(255),
    @txt_db NVARCHAR(255)

--Declare table variable to hold table or procedure names.
DECLARE @tbl_names TABLE (
    int_id INT IDENTITY(1,1) PRIMARY KEY,
    txt_name VARCHAR(255),
    txt_owner VARCHAR(255))

--Insert into table all user tables.
INSERT @tbl_names(
    txt_name,
    txt_owner)
SELECT
    so.name,
    su.name
FROM
    sysobjects so
    INNER JOIN sysusers su ON so.uid = su.uid
WHERE
    so.xtype = 'U'
    AND so.name NOT LIKE 'dt%'

--Set up loop to run DBCC commands against all user tables.
SELECT
    @int_max = (SELECT MAX(int_id) FROM @tbl_names),
    @int_counter = (SELECT MIN(int_id) FROM @tbl_names)

WHILE @int_counter <= @int_max
BEGIN
    SELECT @txt_name = (
        SELECT '[' + txt_owner + '].[' + txt_name + ']'
        FROM @tbl_names
        WHERE int_id = @int_counter)

    --Reindex all user tables.
    SELECT @txt_sql = 'DBCC DBREINDEX(''' + @txt_name + ''')'
    PRINT @txt_sql
    EXEC ( @txt_sql )

    --Update the statistics on all user tables.
    SELECT @txt_sql = 'UPDATE STATISTICS ' + @txt_name
    PRINT @txt_sql
    EXEC ( @txt_sql )

    SELECT @int_counter = @int_counter + 1
END

--Insert into table all procedures.
DELETE @tbl_names

INSERT @tbl_names(
    txt_name,
    txt_owner)
SELECT
    so.name,
    su.name
FROM
    sysobjects so
    INNER JOIN sysusers su ON so.uid = su.uid
WHERE
    so.xtype = 'P'

--Set up loop to force a recompile of all stored procedures.
SELECT
    @int_max = (SELECT MAX(int_id) FROM @tbl_names),
    @int_counter = (SELECT MIN(int_id) FROM @tbl_names)

WHILE @int_counter <= @int_max
BEGIN
    SELECT @txt_name = (
        SELECT '[' + txt_owner + '].[' + txt_name + ']'
        FROM @tbl_names
        WHERE int_id = @int_counter)

    --Recompile the procedures.
    SELECT @txt_sql = 'EXEC sp_recompile ''' + @txt_name + ''''
    PRINT @txt_sql
    EXEC ( @txt_sql )

    SELECT @int_counter = @int_counter + 1
END

SELECT @txt_db = DB_NAME()

--Update the usage on the database.
PRINT 'DBCC UPDATEUSAGE (' + @txt_db + ') WITH NO_INFOMSGS'
DBCC UPDATEUSAGE (@txt_db) WITH NO_INFOMSGS

--Check the database.
PRINT 'DBCC CHECKDB (' + @txt_db + ') WITH NO_INFOMSGS'
DBCC CHECKDB (@txt_db) WITH NO_INFOMSGS

--Record start time of checkpoint.
PRINT 'CHECKPOINT START TIME: ' + CAST(GETDATE() AS VARCHAR(255))
PRINT 'CHECKPOINT'
CHECKPOINT

--Record end time of checkpoint.
PRINT 'CHECKPOINT END TIME: ' + CAST(GETDATE() AS VARCHAR(255))

--Set @int_error_counter to 0. Will attempt the shrink a maximum number of times
--if an error occurs on the shrinkdatabase.
SELECT @int_error_counter = 0

SHRINK_DATABASE:

-- --Record start time of shrinkdatabase.
-- PRINT 'SHRINKDATABASE START TIME: ' + CAST(GETDATE() AS VARCHAR(255))

-- --Shrink the database.
-- PRINT 'DBCC SHRINKDATABASE (' + @txt_db + ')'
-- DBCC SHRINKDATABASE (@txt_db)

-- IF @@ERROR <> 0
-- BEGIN
--     SELECT @int_error_counter = @int_error_counter + 1
--     IF @int_error_counter <= 3
--     BEGIN
--         GOTO SHRINK_DATABASE
--     END
-- END

-- --Record end time of shrinkdatabase.
-- PRINT 'SHRINKDATABASE END TIME: ' + CAST(GETDATE() AS VARCHAR(255))

--Record end time.
PRINT 'END TIME: ' + CAST(GETDATE() AS VARCHAR(255))
GO

SET QUOTED_IDENTIFIER OFF
GO
SET ANSI_NULLS ON
GO

The procedure sits in the master database of each server. I have a job that runs it on each database. The first step of the job overwrites the results file; the other steps append to it. I can then search and parse the file each day to look for errors, check runtimes, etc. MeanOldDBA
[email protected] When life gives you a lemon, fire the DBA.
Nice job, Derrick.
Only one observation: DBREINDEX also updates statistics, so if you run DBREINDEX on all tables, it's not necessary to run UPDATE STATISTICS as well.
Maybe you won't believe that (I didn't 3 months ago), but if you look for old posts there is an explanation. (I think this came from Twan.)
Luis Martin
Moderator
SQL-Server-Performance.com
That's a very nice script. I will definitely save it. We have a very similar nightly job, except we don't do DBCC DBREINDEX.
We can't do a DBCC DBREINDEX because it's an e-commerce site; if I do a reindex, doesn't it hold locks on the tables while it's rebuilding the index? That's why I had to use DBCC INDEXDEFRAG.
Does anybody here maintain a database that is accessed 24x7? If so, what is your plan for index defragmentation?

People running 24x7 use INDEXDEFRAG instead of DBREINDEX. You are right about the blocking. Luis Martin
Moderator
SQL-Server-Performance.com
OK, maybe I'm understanding this wrong. I can't do a DBCC DBREINDEX on my production server because it's 24x7, but I can do it on my reporting server. I did this on my reporting server on a table:
DBCC DBREINDEX (tablename, '', 80)
and I thought this would give me an 80% fill factor.
So after it's done, if I do a DBCC SHOWCONTIG I'm expecting to see 80% scan density, but instead I see 99%.
Am I understanding this wrong?

Change the DBCC DBREINDEX in the script to DBCC INDEXDEFRAG and it can also work for 24x7 shops. The other thing you can do is put a day-of-week test in the script, which I need to put back in for the shrinkdb. You can test for the day of the week and only run certain parts on that day. For example, only do the shrinkdb on Saturday. Just some ideas if you are in a mood to write scripts. MeanOldDBA
[email protected] When life gives you a lemon, fire the DBA.
Yes, you are understanding it wrong. 80% is your fill factor: when the indexes are rebuilt, each page is filled to 80%, leaving 20% free for future inserts. The pages themselves are laid out tightly and contiguously, which is why scan density reads 99%.
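To see where each number shows up, a sketch (the table name and fill factor are illustrative):

DBCC DBREINDEX ('dbo.Orders', '', 80)  -- rebuild all indexes at 80% fill factor

DBCC SHOWCONTIG ('Orders') WITH ALL_INDEXES
-- In the output, look at two different lines:
--   Scan Density [Best Count:Actual Count]: near 100% right after a rebuild
--     (measures how contiguously the extents are laid out)
--   Avg. Page Density (full): near 80%
--     (measures how full each page is; this is where the fill factor shows up)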
Just an observation, but if you are able to run that script every night on your databases and it completes in a reasonable amount of time, then you probably don’t need it. I run a dbreindex that takes 9 hours to complete on Saturday nights. It spends 4 of those hours on one table alone. If we miss this reindex, we are SOL in terms of performance for the week, but doing it once a week helps performance for the entire week.
Valid observation, Quentin. I also run it once a week for all tables.
Also check:
http://www.sql-server-performance.com/forum/topic.asp?TOPIC_ID=444 There you can find a stored procedure that reindexes according to the percentage of fragmentation. It's very useful because it only reindexes when necessary.
Luis Martin
Moderator
SQL-Server-Performance.com
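The general pattern behind such a threshold-driven procedure looks like the sketch below. This is not the script from the linked thread; it is a sketch based on the DBCC SHOWCONTIG TABLERESULTS output in SQL Server 2000 Books Online, with an illustrative 10% threshold:

--Capture fragmentation for every index in the current database.
CREATE TABLE #frag (
    ObjectName VARCHAR(255), ObjectId INT, IndexName VARCHAR(255),
    IndexId INT, Lvl INT, CountPages INT, CountRows INT,
    MinRecSize INT, MaxRecSize INT, AvgRecSize INT, ForRecCount INT,
    Extents INT, ExtentSwitches INT, AvgFreeBytes INT,
    AvgPageDensity INT, ScanDensity DECIMAL, BestCount INT,
    ActualCount INT, LogicalFrag DECIMAL, ExtentFrag DECIMAL)

INSERT #frag
EXEC ('DBCC SHOWCONTIG WITH TABLERESULTS, ALL_INDEXES, NO_INFOMSGS')

--Defragment only indexes whose logical fragmentation exceeds 10%.
DECLARE @tbl VARCHAR(255), @idx VARCHAR(255), @sql NVARCHAR(4000)
DECLARE frag_cur CURSOR FOR
    SELECT ObjectName, IndexName FROM #frag
    WHERE LogicalFrag > 10 AND IndexId BETWEEN 1 AND 254
OPEN frag_cur
FETCH NEXT FROM frag_cur INTO @tbl, @idx
WHILE @@FETCH_STATUS = 0
BEGIN
    SELECT @sql = 'DBCC INDEXDEFRAG (0, ''' + @tbl + ''', ''' + @idx + ''')'
    PRINT @sql
    EXEC (@sql)
    FETCH NEXT FROM frag_cur INTO @tbl, @idx
END
CLOSE frag_cur
DEALLOCATE frag_cur
DROP TABLE #frag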
That’s probably valid for your shop. Our shop runs anywhere from 400-1000 t/sec on the database for about 10 hours a day. The DBCC DBREINDEX takes about 2 hours on our site if we do it once a day. The difference we see in performance by doing it every day is pretty incredible. We have tried weekly and bi-weekly already. Also, we found that it didn’t take much longer to just do all the indexes. I had a script before that tested for amount of fragmentation. Each DBA needs to find the best performance techniques for their shop. The last place I worked at, we did it once a week, using the test for fragmentation. MeanOldDBA
[email protected] When life gives you a lemon, fire the DBA.
Hope I'm not asking an overly simple question. How would one measure the improvement in performance once index defragmentation is done? If I have to prove to someone that index defragmentation does improve performance, what statistics can I collect? Do I pick a few of the most frequently run transactions? Performance counters?
Difficult to answer.
Defragmentation is part of the maintenance plan, so I assume all the other performance tasks, e.g. new indexes, were already done.
If you don't run the defrag, your users will notice soon. If there are no user complaints like "What's going on with this software today!!", then you are working the right way. After a rebuild, I suggest asking a few users how things feel. (Don't expect too much; users come to you when they have a problem, not when they are happy.)
Luis Martin
Moderator
SQL-Server-Performance.com
If you have Profiler running to detect long-running queries and you have Performance Monitor tracking processor performance, you should see an improvement in both if you had indexes that were badly fragmented and are used a lot throughout the day. Other than that, the customer experience is everything, as Luis was pointing out. MeanOldDBA
[email protected] When life gives you a lemon, fire the DBA.
It's better to get a maintenance window for executing DBCC DBREINDEX even though the application is 24/7; that way you avoid the performance slowdown and can adopt the same solution if the problem strikes again in future. Satya SKJ
Moderator
http://www.SQL-Server-Performance.Com/forum
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.
Derrick, this is a completely awesome proc. And I have a question: HOW do you schedule this to run on all databases? I've been banging my head on this, as has the DBA of our hosting company. The only way I can figure it out is to pass a @dbname into the procedure, and that creates a problem with UPDATE STATISTICS. Any tips or guidance you can give me would be greatly appreciated. Lars
— Pretending to be a DBA out of Necessity
I think Derrick schedules one job for each database, but I'm not a developer, so wait for Derrick to confirm.
Luis Martin
Moderator
SQL-Server-Performance.com
The views expressed here are those of the author and no one else. There are no warranties as to the reliability or accuracy of anything presented here.
Luis, thanks, and that's what I thought, too. But calling the proc from master always changes the database context, so it always runs against master. To isolate the problem when running it from different databases, create a procedure in master as:

CREATE PROCEDURE [dbo].[p_TestDB] AS
PRINT 'inside proc db is ' + DB_NAME()
RETURN
GO

Then, from a Query Analyzer session:

USE myDB  -- any other database, which doesn't matter
GO
PRINT 'Now in ' + DB_NAME()
EXEC master.dbo.p_TestDB
PRINT 'After in ' + DB_NAME()

Execute the SQL above, and the output will be:

Now in myDB
inside proc db is master
After in myDB

In my "copy" of the procedure, the query

SELECT so.name,
    su.name
FROM sysobjects so
INNER JOIN sysusers su ON so.uid = su.uid
WHERE so.xtype = 'U'

*should* be returning the tables in the USE database; instead it's only returning the tables in master. And you can't change databases inside a stored procedure. So what's happening, and how do I do this? Quite puzzling. Lars
-- Pretending to be a DBA out of Necessity

Sometimes you need to perform the same action on all databases. You can build a cursor for this purpose, but you can also use the (undocumented) sp_MSforeachdb stored procedure. For example, to check all databases with DBCC CHECKDB:

EXEC sp_MSforeachdb @command1 = "print '?' DBCC CHECKDB ('?')"

Alternatively, you can use the Database Maintenance Plan wizard to set up an individual plan for each database. Satya SKJ
Moderator
http://www.SQL-Server-Performance.Com/forum
This posting is provided “AS IS” with no rights for the sake of knowledge sharing.