I used the NEXUS tool to run a compression analysis on a table. It showed more than 20% compression. When I implemented the same DDL in production, there was hardly any compression on the table.

Any idea what could have caused this?

This is a little urgent. An early reply would be highly appreciated.


Re: Compression-issue

Did you do the compression analysis against development data?

The compress lists need to reflect the data demographics of the real table; therefore the analysis is usually done against production data.

For big tables it is usually good enough to do it on a random sample of the data.
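The sampling idea above can be sketched in SQL. This is only an illustration, assuming a hypothetical table MyDB.MyTable and candidate column Col1; a small random sample is usually enough to see the value demographics:

```sql
/* Sketch: frequency analysis on a ~1% random sample.
   MyDB.MyTable and Col1 are placeholder names. */
SELECT Col1, COUNT(*) AS freq
FROM (SELECT Col1 FROM MyDB.MyTable SAMPLE 0.01) AS s
GROUP BY Col1
ORDER BY freq DESC;
/* The most frequent values are the best COMPRESS candidates. */
```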


Re: Compression-issue

If you did the analysis against your production table, you should ask the vendor why Nexus is not working as expected.

I don't know about Nexus, but the output of AtanaSoft's Compress is usually quite accurate.


Re: Compression-issue

Hi Dnoeth/Ulrich,

We had done it correctly. When we used the Nexus tool, it didn't create the secondary index while generating the compressed DDL (I don't know why). When we implemented it in prod, we created the secondary index, so the size of the table increased.

I came to know this today when I applied the secondary index to the table created by Nexus.

But it is really surprising: the compressed table is 1.6 TB, and when we applied the secondary index it became 2.8 TB, a difference of 1.2 TB!! A good learning, but learned the hard way!!

Generally, what % of the base table's space does a secondary index take?


Re: Compression-issue

It will depend on your secondary index.

You can find formulas in the manuals to calculate the size of a sec index.

The Ferret utility can also be used.

Google "teradata sec index size" and you will find many links.

A brute-force method I often use is to create an X-million-row sample of the table (1 million is often sufficient, but for big systems X might need to be bigger). Calculate the size, drop the sec index, calculate the size again, and work out the % for the sec index. It's a proxy, but it has worked fine so far.
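The brute-force method above can be sketched like this. The database/table names are placeholders, and DBC.TableSize is the dictionary view on T12-era systems (newer releases also offer DBC.TableSizeV):

```sql
/* Sketch of the brute-force sizing method; all names are placeholders. */

-- 1. Build a 1-million-row random sample of the big table.
CREATE TABLE MyDB.MyTable_sample AS
  (SELECT * FROM MyDB.MyTable SAMPLE 1000000) WITH DATA;

-- 2. Create the same secondary index on the sample.
CREATE INDEX (Col1, Col2) ON MyDB.MyTable_sample;

-- 3. Size with the index in place.
SELECT SUM(CurrentPerm) FROM DBC.TableSize
WHERE DatabaseName = 'MyDB' AND TableName = 'MyTable_sample';

-- 4. Drop the index and size again; the difference, as a % of the
--    sample's size, estimates the index overhead on the full table.
DROP INDEX (Col1, Col2) ON MyDB.MyTable_sample;
SELECT SUM(CurrentPerm) FROM DBC.TableSize
WHERE DatabaseName = 'MyDB' AND TableName = 'MyTable_sample';
```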


Re: Compression-issue

What index did you create?

I suppose a USI on multiple columns; this might be quite large.

Of course there might be other reasons, too:

An index on a compressed column will not be compressed, or the record size might be quite small.

Can you show the DDL?


Re: Compression-issue

I have a NUSI on multiple columns, and one of the columns is the one on which I have compression.



    Col1 VARCHAR(150) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL
      COMPRESS ('7','E','8','P','9','H','G','12','11','J','00000','PO','K','20',

    INDEX (Col1, Col2);

That is a snippet of the DDL (the COMPRESS list is truncated).

The table is around 3 TB; after compression (without the sec index) the size is 1.6 TB. When we create the sec index, it goes back up to around 3 TB.

1. Can we not compress a column which is part of a secondary index? Or do we just not get the desired result of compression?

2. I read somewhere that we can compress it; however, MultiLoad into the table will be slow.

Re: Compression-issue

Hi Dieter,

We have proposed MVC to our client in a T12 setup. We have conveyed technically that compression will not impact the data either way. A developer has raised a good concern: some of these columns are derived from bases that can change. If any of these bases change, the profile of the data in the tables will change, which means that a totally new set of 'ideal' compression values would apply.

He has also asked:

How often would the compression values be reviewed?

In my understanding, if the column values are volatile (as for derived columns), then we do not suggest applying compression. Can you share your inputs on this, if any?
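One way to decide when a review is due is to periodically measure how much of the data the existing COMPRESS list still covers. A hedged sketch, reusing a subset of the values from the DDL earlier in this thread (table name is a placeholder):

```sql
/* Sketch: what % of current rows are still covered by the
   existing COMPRESS values? MyDB.MyTable is a placeholder. */
SELECT
  SUM(CASE WHEN Col1 IN ('7','E','8','P','9','H','G','12','11','J')
           THEN 1 ELSE 0 END) * 100.00 / COUNT(*) AS pct_rows_covered
FROM MyDB.MyTable;
/* If this percentage drops noticeably between reviews, the
   compress list should be regenerated from current data. */
```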



Re: Compression-issue

I would recommend PRISE Compress Wizard to implement MVC compression; you can get a fully functional free trial here: