Many conversion functions! After all, the data has to undergo conversion; it is software, not magic, and it will hit performance. If it were me, I would study the source system well and decide what I need further downstream. The same columns are converted repeatedly in different ways; if the conversion can be done once and the checking done afterwards, it may help. Just my thought! Try to check with EXPLAIN and fine-tune.
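As a hypothetical sketch of "convert once, then check" (the table and column names here are assumptions, not from the original query), the repeated conversion can be pushed into a derived table so it runs once and the result is reused downstream:

```sql
-- Hypothetical example: convert order_dt_txt once in a derived table,
-- then filter and group on the already-converted column,
-- instead of repeating the CAST in every predicate and select list.
SELECT t.order_dt,
       COUNT(*)
FROM (
    SELECT CAST(src.order_dt_txt AS DATE FORMAT 'YYYY-MM-DD') AS order_dt
    FROM src_table AS src
) AS t
WHERE t.order_dt >= DATE '2020-01-01'
GROUP BY t.order_dt;
```

Check the EXPLAIN before and after to confirm the optimizer is actually doing less conversion work.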
Thanks, Raja, for responding!
Can you also explain how the COALESCE function works internally in Teradata?
And how it applies to the scenario I posted above?
I suggest you have a look at this link about COALESCE:
Run EXPLAIN and look at the steps and the performance behaviour.
When you write: coalesce(coalesce(s.column2, default_varchar_value), w.column2)
The inner COALESCE returns s.column2 if it is not null, otherwise default_varchar_value, so it never returns a null value.
The outer COALESCE will therefore never return w.column2, because its first argument is never null.
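A minimal sketch of why the outer COALESCE is redundant here (column names taken from the expression above):

```sql
-- The inner COALESCE already guarantees a non-null result,
-- so the outer COALESCE can never fall through to w.column2:
COALESCE(COALESCE(s.column2, default_varchar_value), w.column2)

-- ...is equivalent to simply:
COALESCE(s.column2, default_varchar_value)
```

If the intent was to prefer s.column2, then w.column2, and only then the default, the nesting should be `COALESCE(s.column2, w.column2, default_varchar_value)` instead.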
Check the previous posts for the EXPLAIN output and the COALESCE syntax.
Furthermore, always try to avoid implicit conversions in Teradata; always use CAST, even for NULL.
This has both logical and physical reasons. Logically, you know your data and what datatype you expect, so no extra casting is possible in later steps.
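For instance (a small sketch; the table and column names are assumptions), an untyped NULL in one branch of a set operation forces Teradata to pick a type for it, whereas an explicit CAST fixes the datatype up front:

```sql
-- Untyped NULL: the datatype of col1 is chosen implicitly,
-- which may cause an implicit conversion against tab2.colx.
SELECT NULL AS col1 FROM tab1
UNION ALL
SELECT colx   FROM tab2;

-- Explicitly typed NULL: the datatype is declared, so there is
-- no ambiguity and no surprise conversion downstream.
SELECT CAST(NULL AS VARCHAR(50)) AS col1 FROM tab1
UNION ALL
SELECT colx                      FROM tab2;
```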
From the physical aspect, there is a behaviour in Teradata when a VARCHAR column participates in a GROUP BY / ORDER BY: the column is expanded to its full defined length in spool. So in your example, a VARCHAR(250) column costs 250 bytes (or 500 bytes for Unicode) multiplied by the number of rows you have. This leads to more spool and more CPU/IO to fetch and handle the data blocks.
So be precise with the datatype and length; this will help you with any function.
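A sketch of that spool point (the 250-byte figure comes from the post above; the table name and the assumption that only a short prefix matters are hypothetical):

```sql
-- Each row can carry up to 250 bytes (500 for Unicode) of column2
-- into spool for the aggregation:
SELECT column2, COUNT(*)
FROM big_table
GROUP BY 1;

-- If the meaningful content is known to be short, declaring the
-- narrower length shrinks every spool row:
SELECT CAST(column2 AS VARCHAR(20)) AS column2_short, COUNT(*)
FROM big_table
GROUP BY 1;
```

Comparing EXPLAIN and spool usage for the two forms should show the difference on a large table.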