We have a table in Teradata with millions of records. Now we want to change the datatype of a column that already contains data. What is the best way to do this?
While doing this, which types of locks would apply to the table?
Thanks in advance.
You should first check (see the DDL manual) whether changing the datatype with ALTER TABLE is allowed for this conversion.
If it is allowed, it may be possible without modifying the data (e.g. increasing a VARCHAR size), and then it's really fast.
Otherwise, do an INSERT ... SELECT into a new table, then DROP the old table and RENAME the new one.
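Both options can be sketched roughly as follows (table and column names here are hypothetical placeholders, and the target types are just examples; check the DDL manual for which conversions ALTER TABLE actually permits):

```sql
-- Option 1: in-place change, when the conversion is allowed.
-- In Teradata, ALTER TABLE ... ADD with an existing column name
-- modifies that column, e.g. widening a VARCHAR:
ALTER TABLE mytable ADD mycol VARCHAR(200);

-- Option 2: copy into a new table with the target datatype, then swap.
CREATE TABLE mytable_new (
  id    INTEGER NOT NULL,
  mycol DECIMAL(10,2)       -- the new datatype
) PRIMARY INDEX (id);

INSERT INTO mytable_new SELECT * FROM mytable;

DROP TABLE mytable;
RENAME TABLE mytable_new TO mytable;
```

Note that option 2 needs a maintenance window: between DROP and RENAME the table does not exist under its old name, and the INSERT ... SELECT holds locks for its duration.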
Thanks for your quick reply.
If it is a critical production table and we cannot stop the application that uses it, what is the quickest solution we can perform?
Is it possible without production downtime? It involves a BIG risk.
The table has 40 million records, and we want to change the datatype of the ramount column from DECIMAL(12,2) to DECIMAL(10,2).
The data in the amount column is at most 10 digits long, including the decimals. No additional indexes are present.
You can't decrease the precision of a column using ALTER TABLE.
But why do you want to do that?
Both DEC(12,2) and DEC(10,2) need the same storage, so you could simply add a CHECK(ramount BETWEEN -99999999.99 AND 99999999.99) constraint instead.
Adding it requires a single full-table scan, which should be quite fast on 40 million rows.
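A sketch of that approach, using the hypothetical table name mytable and a made-up constraint name (the BETWEEN range is the one suggested above):

```sql
-- Optional pre-check: confirm no existing value would violate the range
SELECT COUNT(*)
FROM mytable
WHERE ramount NOT BETWEEN -99999999.99 AND 99999999.99;

-- Add the named CHECK constraint; validating existing rows is the
-- single full-table scan mentioned above
ALTER TABLE mytable
  ADD CONSTRAINT ramount_range
  CHECK (ramount BETWEEN -99999999.99 AND 99999999.99);
```

This avoids rebuilding the table entirely, so it is far less risky on a critical production table than the copy-and-swap approach.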
About changing the datatype from now on when creating new tables for an existing application column (ID_COL):
I just wonder whether using INTEGER instead of BIGINT could impact join performance between historical tables and new tables?
HIST_ID_COL (BIGINT) JOIN NEW_ID_COL (INTEGER)
Well, apart from the fact that the same column with two datatypes is not suitable in a physical model ...
NB: BIGINT is a waste of space when the ID value never exceeds 110 000 000.
As INT and BIGINT hash to the same value, the join should perform exactly the same.
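If you want to verify this on your system, Teradata's HASHROW function returns the row-hash for a value; to the best of my knowledge the integer types hash identically for equal values, which is what makes the cross-type join efficient (the literal here is arbitrary):

```sql
-- Compare the row-hash of the same value in both datatypes;
-- matching hashes mean rows land on the same AMP and the join
-- needs no redistribution penalty for the type difference
SELECT HASHROW(CAST(123456 AS INTEGER)) AS int_hash,
       HASHROW(CAST(123456 AS BIGINT))  AS bigint_hash;
```

If the two columns return the same hash, INTEGER vs. BIGINT makes no difference to the join plan's hashing.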
same column with two datatypes is not suitable in a physical model
Even more in a logical model :-)
You are right.
One final question (on this forum): is a needless BIGINT a waste of space in spool too (8 bytes occupied instead of 4 for each value)?