Compression is a good place to start. There are tools out there that analyze a table for compressible values; AtanaSuite has the best one I've used. For things like sales amount and quantity, odds are a relatively small set of values occurs most frequently, and those are your candidates for compression.
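If you don't have a tool handy, a quick-and-dirty version of that analysis is just a frequency count per column. A minimal sketch (the table and column names here are made up for illustration):

```sql
-- Hypothetical names: find the most frequent values in a column
-- to identify candidates for multi-value compression (MVC).
SELECT quantity,
       COUNT(*) AS occurrences
FROM   sales_fact
GROUP  BY quantity
ORDER  BY occurrences DESC;
```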
How many rows are in this table? And how large is it actually getting (in GB)?
You wouldn't specify them at table creation. You would need to do the analysis after the table has been in use for a while, find the commonly occurring values, and then recreate the table with those values compressed.
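For reference, the compressed values end up in the column definitions of the new DDL. A rough sketch, with made-up table, column, and value lists:

```sql
-- Sketch only: MVC values are declared per column in the DDL.
CREATE TABLE sales_fact_new
(
    sale_id     INTEGER NOT NULL,
    quantity    INTEGER COMPRESS (0, 1, 2, 5, 10),          -- most frequent values
    sale_amount DECIMAL(10,2) COMPRESS (0.00, 9.99, 19.99)
)
PRIMARY INDEX (sale_id);

-- Then copy the data over and swap the tables.
INSERT INTO sales_fact_new SELECT * FROM sales_fact;
```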
But based on your row count of 25K, I wouldn't be terribly concerned with performance due to size. I'd focus more on indexing and statistics. 25K rows, even on a small Teradata system, is fairly trivial.
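On the statistics side, something as simple as this usually matters more at that scale than compression does (the column and index names are assumptions):

```sql
-- Keep the optimizer informed; names are illustrative only.
COLLECT STATISTICS ON sales_fact COLUMN (sale_date);
COLLECT STATISTICS ON sales_fact INDEX (sale_id);
```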
I just remembered, though, that depending on which Teradata version you're on, algorithmic compression may also be available. I believe it was introduced in Teradata 13.10.
It acts essentially like WinZip or gzip, in that it applies a compression algorithm to the column data regardless of which values are present. The downside is that you now have additional CPU overhead for compressing and decompressing the values; that overhead isn't a factor with standard MVC.
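If memory serves, you declare it per column with COMPRESS USING / DECOMPRESS USING and a pair of supplied compression UDFs. Roughly like this; the table and column are made up, and you should verify the UDF names and syntax against your 13.10 documentation:

```sql
-- Rough sketch of algorithmic compression (ALC) on a Unicode column.
-- TD_SYSFNLIB.TransUnicodeToUTF8 / TransUTF8ToUnicode are the supplied
-- UDFs I recall; confirm against your release before relying on them.
CREATE TABLE sales_notes
(
    sale_id INTEGER NOT NULL,
    note    VARCHAR(1000) CHARACTER SET UNICODE
            COMPRESS USING   TD_SYSFNLIB.TransUnicodeToUTF8
            DECOMPRESS USING TD_SYSFNLIB.TransUTF8ToUnicode
)
PRIMARY INDEX (sale_id);
```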
One more caveat for your 25K-row table: prior to TD 13.10, MVC could not be applied to VARCHAR columns at all. Primary Index columns also cannot be compressed, so a PI on a wide character column further narrows the opportunity to reduce the table's size. In TD 13.10, MVC has been enhanced so it can now be used on variable-length character columns (VARCHAR up to 510 characters wide), and it supports any numeric type, all character data, GRAPHIC, VARGRAPHIC, BYTE, and VARBYTE.
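So on 13.10, something like this is now legal; again, the names and value list are just for illustration:

```sql
-- MVC on a VARCHAR column, allowed as of TD 13.10 (values up to 510 characters).
CREATE TABLE customer_dim
(
    customer_id INTEGER NOT NULL,
    region      VARCHAR(50) COMPRESS ('NORTH', 'SOUTH', 'EAST', 'WEST')
)
PRIMARY INDEX (customer_id);
```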