There is a requirement like this:
A column from the source whose value is a UTC epoch time in seconds, for example: 1439374041
Does anyone know whether Teradata has any function that can convert such an epoch value to TIMESTAMP(6)?
1439374041 -> 2015-08-12 10:07:21.000000
If no built-in function in Teradata can help with this, does anybody know a SQL statement that can achieve this kind of conversion? Thanks for the help.
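For reference, a quick Python sanity check (not Teradata, just to confirm what the target value should be) of the epoch-seconds interpretation:

```python
from datetime import datetime, timezone

# 1439374041 interpreted as seconds since the Unix epoch, in UTC
ts = datetime.fromtimestamp(1439374041, tz=timezone.utc)
print(ts.strftime("%Y-%m-%d %H:%M:%S.%f"))  # 2015-08-12 10:07:21.000000
```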
Thanks, Dnoeth. So how about this example? This is a UTC time with milliseconds:
1439374041000 -> 2015-08-12 10:07:21.000000
SELECT CAST(1439374041111 AS BIGINT) AS ut,
       TO_TIMESTAMP(ut/1000) + (ut MOD 1000) * INTERVAL '0.001' SECOND
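The same split (integer seconds via division, leftover milliseconds via MOD) can be sanity-checked outside Teradata; here is a small Python sketch of the arithmetic the query above performs:

```python
from datetime import datetime, timedelta, timezone

ut = 1439374041111                  # epoch time in milliseconds
seconds, millis = divmod(ut, 1000)  # mirrors ut/1000 and ut MOD 1000
ts = datetime.fromtimestamp(seconds, tz=timezone.utc) + timedelta(milliseconds=millis)
print(ts.strftime("%Y-%m-%d %H:%M:%S.%f"))  # 2015-08-12 10:07:21.111000
```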
Is it possible to create a mirror image of this function? I'm trying to have CURRENT_TIMESTAMP represented as an INTEGER/BIGINT. The point of the game is to have a roughly unique process id (uniqueness within 1 second would be OK; anything more precise would be perfect).
I was experimenting with HASHBUCKET(HASHROW(CURRENT_TIMESTAMP)), but it turned out that with bad luck you can run into trouble.
This is a modified Timestamp-to-Unixtime calculation including centiseconds:
REPLACE FUNCTION TimeStamp_to_Unixtime_centisec (ts TIMESTAMP(6))
RETURNS DECIMAL(18,6)
LANGUAGE SQL
CONTAINS SQL
DETERMINISTIC
SQL SECURITY DEFINER
INLINE TYPE 1
RETURN
(CAST((CAST(ts AS DATE) - DATE '1970-01-01') AS DECIMAL(18,6)) * 86400
 + (EXTRACT(HOUR FROM ts) * 3600)
 + (EXTRACT(MINUTE FROM ts) * 60)
 + (EXTRACT(SECOND FROM ts))) * 100;
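The same calculation can be mirrored in Python to verify the function's output (the final * 100 turns epoch seconds into centiseconds):

```python
from datetime import date, datetime

def timestamp_to_unixtime_centisec(ts: datetime) -> float:
    """Mirror of the Teradata UDF: days since the epoch * 86400 plus the
    time-of-day in seconds (including fractions), scaled to centiseconds."""
    days = (ts.date() - date(1970, 1, 1)).days
    secs = days * 86400 + ts.hour * 3600 + ts.minute * 60 + ts.second + ts.microsecond / 1e6
    return secs * 100

print(timestamp_to_unixtime_centisec(datetime(2015, 8, 12, 10, 7, 21)))  # 143937404100.0
```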
First of all, thanks for your help.
And for the future, so that others don't kick open doors again (or skip the doors I kicked :)), I will post my small findings/thoughts on generating a unique process id on the fly.
I went through the following options (major observations next to each):
- HASHBUCKET(HASHROW(CURRENT_TIMESTAMP)): nice if you want local uniqueness (two runs, one after another, will most probably not return the same id), but since the hashes reduce the numbers to roughly 1 million values, it can stop being globally unique quite fast
- we can use Dieter's function to get the BIGINT value, which gives really nice precision (though I could imagine a situation where it is not locally unique)
- we can use Dieter's function but cap the precision at INTEGER; here again it can stop being locally unique, and at some point, if run frequently, we can exceed the INTEGER maximum
- we can use a table and store the id there (an ETL-like solution); this certainly works, but creates quite some overhead
- we can think of identity columns within that table, but again we are back to an ETL-like solution
Since my process id does not need local uniqueness (it will run once per 10 minutes), I finally decided on a timestamp converted to INTEGER. And I kept the HASHing as a “run instance” indicator just in the logs. So I have something like this (not exactly, but it represents how I was testing :)):
--TIMESTAMP TO TRANSFORM
SELECT
  CAST('2020-12-31 23:59:59.999999+00:00' AS TIMESTAMP) AS LKTMS1 --AS LKTMS
, CURRENT_TIMESTAMP AS LKTMS --AS LKTMS1
, CAST(LKTMS AS DATE)
, EXTRACT(HOUR FROM LKTMS)
, EXTRACT(MINUTE FROM LKTMS)
, EXTRACT(SECOND FROM LKTMS)
, (CAST((CAST(LKTMS AS DATE) - DATE '2016-01-01') AS DEC(18,6)) * 86400
   + (EXTRACT(HOUR FROM LKTMS) * 3600)
   + (EXTRACT(MINUTE FROM LKTMS) * 60)
   + (EXTRACT(SECOND FROM LKTMS))
  ) AS NUMERIC_CALCULATION
, HASHBUCKET(HASHROW(CURRENT_TIMESTAMP)) AS RILU_Numeric_Unique_Id
, CAST(NUMERIC_CALCULATION * 100 AS BIGINT) AS BIGU_Numeric_Unique_Id
, CAST(NUMERIC_CALCULATION * 1 AS INTEGER) AS RIGU_Numeric_Unique_Id
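To see how much headroom the 2016-01-01 base gives before the INTEGER variant overflows, the same seconds-since-base arithmetic can be sketched in Python (the 2084 cutoff is my own back-of-the-envelope estimate, not from the thread):

```python
from datetime import datetime

BASE = datetime(2016, 1, 1)

def process_id(ts: datetime) -> int:
    # whole seconds since the 2016-01-01 base, like the INTEGER variant above
    return int((ts - BASE).total_seconds())

pid = process_id(datetime(2020, 12, 31, 23, 59, 59))
print(pid)              # 157852799
print(pid < 2**31 - 1)  # True: still far from the INTEGER maximum
# 2**31 - 1 seconds after 2016-01-01 lands in early 2084, so the INTEGER
# variant has decades of headroom before it overflows.
```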
For now it works. :)
Thanks again for your help.