Reclaiming data file space | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

Reclaiming data file space

I am in need of reclaiming unused space in a data file. The database contains one main table and several lookup tables. The main table has a clustered primary key on an identity column, and it also has a text column. The data file was 79GB. The main table contained 5 million rows. I deleted half the rows, reducing the count to 2.5 million, then ran a shrinkfile, and the data file is now at 68GB. The results of the shrinkfile were the following:
currentsize 8591936
minimumsize 128
usedpages 8206784
estimatedpages 8206784

I am planning on running a bcp export, followed by a truncate table, and then importing the bcp file back into the table. After the import, should I once again run DBCC SHRINKFILE to reclaim the unused space created (since I should be able to get it lower than 68GB)?
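For what it is worth, DBCC SHRINKFILE reports those four columns in 8 KB pages, so a rough conversion (my arithmetic, not part of the original post) shows how little a further shrink can currently release:

currentsize  8,591,936 pages x 8 KB  =  ~67,125 MB  (physical size of the data file)
usedpages    8,206,784 pages x 8 KB  =  ~64,115 MB  (pages still allocated to objects)
difference                              ~ 3,010 MB  (all that a shrink alone can give back right now)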

You can rebuild the clustered index instead. Don’t you have enough disk space? It is better to leave enough free space so you can handle data growth without needing to expand the db files.
Is that DBCC DBREINDEX? If not, how do you rebuild a clustered index? If my data file is at 68GB, will that shrink it to, say, 50GB? Yes, I am short on disk free space.
DBCC DBREINDEX, or CREATE CLUSTERED INDEX with DROP_EXISTING. You deleted half of the rows, so it should save you a lot of space, but I am not sure because of the text column. Your text column is probably stored separately, and that space may not be compacted by an index rebuild; I overlooked that the text column was mentioned when I answered. On disk space, what I mean is: do not shrink the file down to the size of the currently occupied space; leave enough room for growth inside the db files.
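A minimal sketch of the two rebuild options mentioned above, assuming a table dbo.MainTable with a clustered index PK_MainTable on an identity column Id (all placeholder names, SQL Server 2000-era syntax):

-- Option 1: rebuild every index on the table
DBCC DBREINDEX ('dbo.MainTable')

-- Option 2: rebuild just the clustered index in place
-- (keep the same key definition if the index backs the primary key)
CREATE UNIQUE CLUSTERED INDEX PK_MainTable
    ON dbo.MainTable (Id)
    WITH DROP_EXISTING

-- As noted above, text/image data lives on its own pages,
-- so either rebuild may leave that allocation largely untouched.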
The table I am performing the BCP operations on will no longer receive any inserts or updates, only deletes. I will still leave some room for growth, just for the odd chance we decide to insert or update at a later date, but should that time come I will expand the file first. So getting back to the process: after I import into the truncated table, to further reduce the size of the data file, would you suggest:
1. Rebuilding the clustered index
2. DBCC SHRINKFILE
That process sounds contradictory. But how else do I reclaim some of this 68GB?
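For reference, the two steps in that question would look roughly like this (placeholder names again; the target is only an example and should leave the growth headroom suggested above):

-- 1. Rebuild the indexes so the remaining rows are packed onto fewer pages
DBCC DBREINDEX ('dbo.MainTable')

-- 2. Then release the freed pages back to the operating system
DBCC SHRINKFILE (MainData, 60000)   -- MainData = logical file name from sysfiles; target size in MB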
What recovery model do you have?
Luis Martin
Moderator
SQL-Server-Performance.com
One of the symptoms of an approaching nervous breakdown is the belief that one’s work is terribly important
Bertrand Russell
All postings are provided “AS IS” with no warranties for accuracy.
Simple recovery.
I would like to know how much space the database has available to grow (blue in EM). First run: sp_spaceused @updateusage = ‘TRUE’, because I can’t understand your numbers. Also, you can shrink the database to the minimum size (green in EM), but this will cost performance; let’s go step by step.
Luis Martin
I ran sp_spaceused @updateusage = ‘TRUE’ and received the following results:
database_size:68709.94 MB
unallocated space:3403.52 MB
reserved:65250288 KB
data:64842016 KB
index_size:201504 KB
unused:206768 KB
Also, what do you mean by blue and green in enterprise manager?

If you use EM: in the View menu, under Taskpad, you will find your database shown with green = data and blue = reserved space to grow. So in your case 3403.52 MB is the reserved space to grow (blue). What I see is that you have little reserved space to grow. You can shrink to get that 3403.52 MB back, but when the database needs to grow through normal use it will take at least those 3403 MB again. Furthermore, the 68709.94 MB includes that 3403 MB; in fact your database data is about 65GB. Impossible to reduce it to 50GB without deleting rows or indexes.
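Reading the sp_spaceused output above another way (my own rough arithmetic; it assumes the leftover difference is the log file):

reserved       65,250,288 KB  =  ~63,721 MB  (space allocated to objects)
unallocated     3,403.52 MB                  (free space inside the data file)
data file                     =  ~67,125 MB
database_size  68,709.94 MB                  (data plus log, so roughly 1,585 MB of log)

So a shrink today can only hand back the ~3,403 MB of unallocated space; everything else is still reserved by the table.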
Luis Martin
You are saying it is impossible to reduce to 50GB, even if I bcp export, shrink, then bcp import, when I started with 79GB and deleted half the table?
Not at all. My opinion was based on the information you gave. Let’s find the actual table size. Please run: sp_spaceused ‘main-table’
Luis Martin
The values returned from running sp_spaceused ‘main-table’ are:
rows: 2665072
reserved: 65249392 KB
data: 64841624 KB
index_size: 201176 KB
unused: 206592 KB
Well, as you can see there is about 64GB of data, almost the entire database.
To confirm that everything is OK, first rebuild the clustered index, then run sp_spaceused ‘main-table’ again to see if there is any change.
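As a rough sanity check on those numbers (my arithmetic, taking the sizes in the thread at face value):

data    64,841,624 KB / 2,665,072 rows      =  ~24 KB per remaining row
before the delete:  ~79 GB / 5,000,000 rows =  ~16 KB per row

The per-row figure going up after the delete fits the earlier concern that the text allocation is not being released in step with the deleted rows.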
Luis Martin
I guess you have kind of lost me. Am I reading between the lines that to shoot for 50GB, I will need to drop and recreate my clustered index? What about the initial approach to bcp export, shrink, then bcp import?
It was not my intention.
I was just thinking that you started with 79GB and, after deleting 2.5 million rows, there is still 64GB, so maybe the table was corrupted.
In short, go ahead with the bcp export, shrink, and bcp import, and afterwards tell us what the result was.
Luis Martin
Agree with Luis: bcp out, truncate the table, drop the indexes, bcp in, then rebuild the indexes. After that, shrink. As I said, because of the text column I am not sure whether rebuilding the indexes by itself would have the same effect.
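A rough outline of that sequence, sketched with placeholder names (MyDb, dbo.MainTable, the MainData logical file name, the server name, index names, and file paths are all assumptions to adapt):

-- 1. Export the remaining rows in native format (run from a command prompt):
--      bcp MyDb.dbo.MainTable out D:\export\MainTable.dat -n -T -S MYSERVER

-- 2. Empty the table and drop the nonclustered indexes (the clustered PK index is left in place here):
TRUNCATE TABLE dbo.MainTable
DROP INDEX MainTable.IX_MainTable_Lookup     -- repeat for each nonclustered index

-- 3. Import the data back, keeping the original identity values (-E):
--      bcp MyDb.dbo.MainTable in D:\export\MainTable.dat -n -T -E -S MYSERVER

-- 4. Rebuild the indexes, then release the freed space, leaving headroom for growth:
DBCC DBREINDEX ('dbo.MainTable')
DBCC SHRINKFILE (MainData, 50000)            -- example target in MB (the 50GB discussed above)

Whether the file actually lands near 50GB depends on how much of the text allocation the round trip compacts, which is the open question in this thread.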