Delete some data or use smaller data types (e.g. SMALLINT instead of INTEGER) if you can.
The scenario here is that we have already compressed every column we can (everything except the PI and PPI columns), but we still need to free up space. Is there any other option apart from deleting data?
If block-level compression still doesn't free up enough space, I'd elaborate on an earlier suggestion about data types.
Execute a min(field) and a max(field) for every numeric field in your largest tables.
Compare those results to the space used by the data types listed below, and alter the data type of any field whose values fall within the range of a smaller type. On millions of rows, those changes add up fast.
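A minimal sketch of that check, using a hypothetical fact table named sales_fact with numeric columns store_id and qty (substitute your own table and column names):

```sql
-- Find the actual value range of each numeric column.
-- Table and column names here are placeholders.
SELECT MIN(store_id) AS min_store_id,
       MAX(store_id) AS max_store_id,
       MIN(qty)      AS min_qty,
       MAX(qty)      AS max_qty
FROM sales_fact;
```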
Byteint – 1 Byte range -128 to +127
Smallint – 2 Bytes range -32,768 to +32,767
Integer – 4 Bytes range -2,147,483,648 to +2,147,483,647
Bigint – 8 Bytes range -9,223,372,036,854,775,808 to +9,223,372,036,854,775,807
Decimal(1,0) to Decimal(2,0) – 1 Byte up to 99
Decimal(3,0) to Decimal(4,0) – 2 Bytes up to 9999
Decimal(5,0) to Decimal(9,0) – 4 Bytes up to 999,999,999
Decimal(10,0) to Decimal(18,0) – 8 Bytes up to 999,999,999,999,999,999
Decimal(19,0) to Decimal(38,0) – 16 Bytes up to 99,999,999,999,999,999,999,999,999,999,999,999,999 (38 digits)
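Note that Teradata generally won't let you ALTER a column down to a smaller type in place, so the usual route is to rebuild the table and swap. A sketch, assuming a hypothetical sales_fact table whose store_id and qty columns turned out to fit comfortably in SMALLINT:

```sql
-- Rebuild with smaller types, then swap the tables.
CREATE TABLE sales_fact_new (
    sale_id  INTEGER NOT NULL,
    store_id SMALLINT,   -- was INTEGER; MIN/MAX showed values fit
    qty      SMALLINT    -- was INTEGER
)
PRIMARY INDEX (sale_id);

INSERT INTO sales_fact_new
SELECT sale_id, store_id, qty
FROM sales_fact;

DROP TABLE sales_fact;
RENAME TABLE sales_fact_new TO sales_fact;
```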
Another test on your numeric fields is to count how many rows fall outside a smaller data type's range. If only a handful of rows fall outside it, those might be outliers - a subject matter expert could certainly shed light on that.
Then you'd have two choices: 1) delete them completely from the original table and change the data type, or 2) create a second table just to house those extreme rows while changing the data type on the original table.
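As a sketch of option 2, assuming a hypothetical sales_fact table whose qty column would fit in SMALLINT except for a few extreme rows:

```sql
-- Count the rows that won't fit in SMALLINT (-32,768 to +32,767).
SELECT COUNT(*)
FROM sales_fact
WHERE qty NOT BETWEEN -32768 AND 32767;

-- Move those extremes to a side table, then delete them from the
-- original so its qty column can be shrunk to SMALLINT.
CREATE TABLE sales_fact_outliers AS
    (SELECT *
     FROM sales_fact
     WHERE qty NOT BETWEEN -32768 AND 32767)
WITH DATA;

DELETE FROM sales_fact
WHERE qty NOT BETWEEN -32768 AND 32767;
```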
One last thing - double-check your system parameters to see the default DECIMAL data type produced by a CREATE TABLE AS statement. At one site, I saw the default had been set to Decimal(38,0) - probably way bigger than anything you'd ever need.
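One way to see what your own system produces - treat this as a sketch, since the exact precision assigned depends on your DBS Control settings (the MaxDecimal field in particular):

```sql
-- Build a throwaway table from an aggregate expression and inspect
-- the DECIMAL precision the system assigned to the result column.
CREATE TABLE chk_dec AS
    (SELECT SUM(CAST(1 AS DECIMAL(9,2))) AS total
     FROM (SELECT 1 AS x) t)
WITH DATA;

SHOW TABLE chk_dec;   -- look at the declared type of "total"
DROP TABLE chk_dec;
```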