Why would you want to change a column's data type from VARCHAR(10) to CHAR(1) for a large table (~17 million records)? Is the VARCHAR truly much more expensive due to its 2-byte overhead? Thanks.
From a design perspective, if the data is CHAR(1) then (personally) I'd design it that way: it makes the design 'truly' match the data, which I think is a sensible idea.
Does it make any practical difference? Probably not.
Using VARCHAR you're spending 3 bytes (Latin) or 4 bytes (Unicode) to store 1 character, because each value carries a 2-byte length field on top of the data itself. Again, from a design perspective that doesn't make sense to me, and there is additional space required.
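To put a rough number on that space difference, here's a back-of-envelope calculation for the 17-million-row table from the question. It assumes a single-byte (Latin) character set and a 2-byte length prefix per VARCHAR value; actual on-disk sizes will also depend on row overhead and compression in your DBMS.

```python
# Hypothetical storage estimate: CHAR(1) vs VARCHAR holding 1 character,
# assuming Latin (1 byte/char) and a 2-byte VARCHAR length prefix.
ROWS = 17_000_000

char_bytes = ROWS * 1            # CHAR(1): 1 byte of data per row
varchar_bytes = ROWS * (1 + 2)   # VARCHAR: 1 byte of data + 2-byte length field

extra_mb = (varchar_bytes - char_bytes) / 1_000_000
print(f"extra space for VARCHAR: {extra_mb:.0f} MB")  # -> extra space for VARCHAR: 34 MB
```

So for this table the VARCHAR costs on the order of 34 MB more; real, but small relative to the table itself.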
I ran some simple tests using a 17M-row table:
- simple select against character data averaged 2.57 (for CHAR) vs. 2.72 (for VARCHAR) CPU seconds
- PI:PI join on a UPI integer column AND character column averaged 17.30 (for CHAR) vs. 17.53 (for VARCHAR) CPU seconds
- aggregation by this character data (26 groups) averaged 9.85 (for CHAR) vs. 12.01 (for VARCHAR) CPU seconds
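Expressed as relative overhead, those measurements work out as follows (a simple percentage calculation on the CPU-second figures above, nothing more):

```python
# Relative CPU-time overhead of VARCHAR vs CHAR, from the timings above.
tests = {
    "simple select": (2.57, 2.72),
    "PI:PI join":    (17.30, 17.53),
    "aggregation":   (9.85, 12.01),
}

def overhead_pct(char_s: float, varchar_s: float) -> float:
    """Percent increase in CPU seconds going from CHAR to VARCHAR."""
    return 100.0 * (varchar_s - char_s) / char_s

for name, (c, v) in tests.items():
    print(f"{name}: +{overhead_pct(c, v):.1f}%")
# -> simple select: +5.8%
# -> PI:PI join: +1.3%
# -> aggregation: +21.9%
```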
Sure, there's a measurable difference, but these are small numbers. Also, remember that your mileage may vary.