VARCHAR or CHAR | SQL Server Performance Forums

SQL Server Performance Forum – Threads Archive

VARCHAR or CHAR

I am running SQL 2000 SP4.

Here’s my table:

CREATE TABLE [BIGHUH] (
[LOGIN_ID] [char] (10) NULL ,
[NAME] [char] (40) NULL ,
[COVERAGE_PATH] [char] (8) NULL ,
[COVERAGE_PATH_NAME] [char] (40) NULL ,
[SN1] [char] (8) NULL ,
[SN1_NAME] [char] (40) NULL ,
[SN2] [char] (8) NULL ,
[SN2_NAME] [char] (40) NULL ,
..... SN3-SN59 repeating .....
[SN60] [char] (8) NULL ,
[SN60_NAME] [char] (40) NULL ,
[DIRECT_AGENT_SKILL] [char] (8) NULL ,
[DIRECT_AGENT_SKILL_NAME] [char] (40) NULL ,
[GROUP_SET] [smallint] NULL ,
[COL_DAT] [char] (10) NULL
)

Maybe I have things completely wrong here, but wouldn’t the table be smaller if all of the chars were converted to varchars? I say this because most of the time the columns past SN5 are going to be blank. I always thought that varchar was like the run-length encoding in a JPEG file: if the data is all blank (or one color for a JPEG), the file will be smaller; if the data is always full (or multiple colors for a JPEG), the file will be bigger.

There are about 2 million rows in the table, with no indexes and no statistics. The table is about 7.5 GB. I converted SN50*-SN60* to varchar and the file grew by about 90 KB. I tried UPDATE STATISTICS and DBCC UPDATEUSAGE, and even tried to put a clustered index on the table, and the size stayed the same.

Do I need to run an LTRIM(RTRIM()) on all of the columns that I have converted?

What am I missing here?

Live to Throw
Throw to Live
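For reference, a table’s actual size in SQL 2000 is usually checked with sp_spaceused after correcting the space-usage counters. A minimal sketch; the database name MyDatabase is a placeholder:

-- correct the space-usage counters first, then report rows/reserved/data sizes
DBCC UPDATEUSAGE ('MyDatabase', 'BIGHUH')
EXEC sp_spaceused 'BIGHUH'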
The char is a fixed-length character data type, the varchar is a variable-length character data type. Because char is a fixed-length data type, the storage size of the char value is equal to the maximum size for this column. Because varchar is a variable-length data type, the storage size of the varchar value is the actual length of the data entered, not the maximum size for this column. You can use char when the data entries in a column are expected to be the same size.
You can use varchar when the data entries in a column are expected to vary considerably in size.
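A quick sketch of that storage difference, using variables rather than a table (SQL 2000 syntax; the variable names are illustrative):

DECLARE @fixed char(40), @variable varchar(40)
SET @fixed = 'abc'      -- padded with trailing spaces to 40 bytes
SET @variable = 'abc'   -- stored as only 3 bytes (plus row overhead in a table)
SELECT DATALENGTH(@fixed) AS fixed_bytes, DATALENGTH(@variable) AS variable_bytes
-- returns 40 and 3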
MohammedU.
Microsoft SQL Server MVP
Moderator
SQL-Server-Performance.com All postings are provided “AS IS” with no warranties for accuracy.

I wasn’t asking for an explanation of what CHAR and VARCHAR are. The example I gave shows that I knew (or thought that I knew) what the difference was, so I’m not sure why you gave me a definition that reiterated what I posted. I was asking why my table size didn’t change when I converted from CHAR to VARCHAR. By your definition it should have saved a ton of space. Thanks for the reply. Live to Throw
Throw to Live
No, you have to reclaim the space before the size is reduced. There is a DBCC command for it, but it escapes me for now – somebody else will chime in soon.
I have tried UPDATE STATISTICS and UPDATEUSAGE, but it didn’t do anything.
I tried to put a clustered index on it figuring that it would regenerate all the pages, but nothing changed. Live to Throw
Throw to Live
I think
DBCC SHRINKFILE
or
DBCC SHRINKDATABASE
will reclaim space.
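A syntax sketch of both, with placeholder names (MyDatabase, MyDatabase_Data):

DBCC SHRINKDATABASE ('MyDatabase', 10)      -- shrink all files, leaving 10% free space
DBCC SHRINKFILE ('MyDatabase_Data', 7000)   -- or shrink one data file to a 7000 MB target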
I believe you have to update all columns with the RTRIM function… check the following example:
create table #t (name char(100), name1 varchar(100))

insert into #t select '1234567890', '1234567890'

select datalength(name) Cha, datalength(name1) Vcha from #t
/*
Cha         Vcha
----------- -----------
100         10
*/

alter table #t alter column name varchar(100) null

select datalength(name) Cha, datalength(name1) Vcha from #t
/*
Cha         Vcha
----------- -----------
100         10
*/

update #t set name = rtrim(name)

select datalength(name) Cha, datalength(name1) Vcha from #t
/*
Cha         Vcha
----------- -----------
10          10
*/
MohammedU.
Microsoft SQL Server MVP
Moderator
SQL-Server-Performance.com All postings are provided “AS IS” with no warranties for accuracy.

quote: Originally posted by rklimes
I think DBCC SHRINKFILE or DBCC SHRINKDATABASE will reclaim space.

DBCC SHRINKFILE shrinks the size of the specified data file or log file for the related database; DBCC SHRINKDATABASE shrinks the size of the data files in the specified database. Thanks, but I don’t need to shrink the files or my database. I need to figure out why converting a mostly blank column from CHAR to VARCHAR didn’t decrease the size of my table. I also tried to RTRIM(LTRIM()) two of my columns, then ran UPDATE STATISTICS and DBCC UPDATEUSAGE on my tables, and the file size didn’t change. I am going to try to create a new table with all VARCHAR columns and then copy the data into it. I will post whether or not that fixed the problem. Live to Throw
Throw to Live
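A sketch of that rebuild approach, trimming as the data is copied. BIGHUH_V is a hypothetical table name, and only a few of the 120-odd columns are shown; the pattern repeats for SN2 through SN60:

CREATE TABLE BIGHUH_V (
    LOGIN_ID varchar(10) NULL,
    NAME varchar(40) NULL,
    SN1 varchar(8) NULL,
    SN1_NAME varchar(40) NULL
)

INSERT INTO BIGHUH_V (LOGIN_ID, NAME, SN1, SN1_NAME)
SELECT RTRIM(LOGIN_ID), RTRIM(NAME), RTRIM(SN1), RTRIM(SN1_NAME)
FROM BIGHUH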
quote: I also tried to RTRIM(LTRIM()) two of my columns, then ran UPDATE STATISTICS and DBCC UPDATEUSAGE on my tables, and the file size didn’t change.

Correction to the above: it’s the TABLE size that didn’t change.
Live to Throw
Throw to Live
I have created a new table with only VARCHARs and loaded it. The size didn’t change. I tried it with and without NULLs; same results. I also applied RTRIM(LTRIM()) to two of the columns, then ran UPDATE STATISTICS and DBCC UPDATEUSAGE, and the size didn’t change. This doesn’t make any sense to me. I have converted columns from CHAR to VARCHAR before and saw a definite change in size. Live to Throw
Throw to Live
I set up a clustered index on the table and ran DBCC INDEXDEFRAG on it, and there was no change in size. The output said that zero pages were removed. Live to Throw
Throw to Live
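The defrag step above, as a syntax sketch; IX_BIGHUH is a placeholder for the clustered index name, and 0 means the current database:

DBCC INDEXDEFRAG (0, 'BIGHUH', 'IX_BIGHUH')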
Anyway, my problem is solved!! Here’s what I did:

1. Ran LTRIM(RTRIM()) on all 120 SN** columns – no change yet
2. Ran UPDATE STATISTICS & DBCC UPDATEUSAGE – no change yet
3. Ran DBCC INDEXDEFRAG – removed about 80% of the pages

Now the table is about 20% of its previous size. I’m guessing that the table was just too wide, and testing on one column wasn’t enough to reduce the size, given the page sizes.
Live to Throw
Throw to Live
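Put together as a script, the sequence that finally worked looks roughly like this. Only two of the 120 columns are shown, and IX_BIGHUH is a placeholder index name:

-- step 1: trim every SN** column (assumes they have already been converted to varchar;
--         repeat the SET list for all 120 columns)
UPDATE BIGHUH SET SN1 = LTRIM(RTRIM(SN1)), SN1_NAME = LTRIM(RTRIM(SN1_NAME))
-- step 2: refresh statistics and the space-usage counters
UPDATE STATISTICS BIGHUH
DBCC UPDATEUSAGE (0, 'BIGHUH')
-- step 3: compact the pages and release the freed ones
DBCC INDEXDEFRAG (0, 'BIGHUH', 'IX_BIGHUH')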
Based on my testing, RTRIM should change the size of the column but will not release the empty space…
When you ran INDEXDEFRAG it removed the unused pages and rearranged all the pages… you should have seen the same effect when you ran DBCC DBREINDEX…

quote: 1. Ran LTRIM(RTRIM()) on all 120 SN** columns – no change yet
2. Ran DBCC INDEXDEFRAG – removed about 80% of the pages
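A sketch of the DBREINDEX alternative; an empty index name rebuilds all indexes on the table, and 90 is a sample fill factor:

DBCC DBREINDEX ('BIGHUH', '', 90)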
MohammedU.
Microsoft SQL Server MVP
Moderator
SQL-Server-Performance.com All postings are provided “AS IS” with no warranties for accuracy.

I did not see a size change until I set all of the SN** columns to VARCHAR. I originally tried to trim and defrag with 2 SN** columns, then 4, with no size change. I’m assuming that is because the rows were so wide. I agree, I should have seen a size change when I did the trim and defrag originally. Live to Throw
Throw to Live