I have a requirement to load data from 2009 onwards; snapshots will be delivered with a one-week gap, starting from 2009-01-01. I am planning to use a valid-time temporal table to hold the data because the business team wants time-travel capability in the data. My input snapshots look as follows:
Data as of 2009-01-01
Data as of 2009-01-08
Data as of 2009-01-16 and so on till date.
When I load my first data set, I can let the valid time start from 2009-01-01 until changed. But when I get my next snapshot, say on 2015-07-01 with a snap date of 2009-01-08, I need to end the valid time of the previous snapshot's records as of 2009-01-08 and start a new row with valid time '2009-01-08'. If I try doing it with the CURRENT VALIDTIME merge option, the system ends the previous snapshot as of '2015-07-01' and starts the new snapshot as '2015-07-01' until changed.
FYI, my ongoing snapshots are deltas and they will be in non-temporal tables on Teradata. Is there any way I can override the period of validity while processing the MERGE INTO statements?
Here is how my merge statement looks:
MERGE INTO tempo_customer t
USING customer_metrics_ld s
ON t.key = s.key
WHEN MATCHED THEN
  UPDATE SET ... -- all columns (except the valid-time duration column)
WHEN NOT MATCHED THEN
  INSERT ...
Even for the newly inserted rows (WHEN NOT MATCHED), I am seeing the period of validity as '2015-07-01' rather than '2009-01-08'.
Any help would be greatly appreciated.
Small correction: for the newly inserted rows, the period of validity is starting from '2009-01-08', so I am good with that. But for the historic rows I still need some help from the experts.
I think I was able to get this resolved. I used SEQUENCED VALIDTIME. While getting the data from the delta table (which is a non-temporal table), I converted the data into a temporal result set and applied the period of validity accordingly. However, one major difference I observed is that when my target table is partitioned, the merge takes much longer to process.
The merge on the non-PPI valid-time table takes 3 minutes and 40 seconds, whereas on the PPI valid-time table it takes around 14 minutes and 40 seconds.
Not sure why though.
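For anyone hitting the same issue, roughly what I did looks like the following sketch. Table and column names here are illustrative (`vt` stands for the valid-time PERIOD column, `snap_dt` for the snapshot date carried in the delta), so treat it as an outline rather than the exact statement:

```sql
-- Build a temporal result set from the non-temporal delta by
-- constructing a PERIOD from the snapshot date, then merge with
-- a SEQUENCED VALIDTIME qualifier so the supplied period is used
-- instead of the session date.
SEQUENCED VALIDTIME
MERGE INTO tempo_customer t
USING (
    SELECT cust_key,
           metric1,
           PERIOD(snap_dt, UNTIL_CHANGED) AS vt  -- validity starts at the snap date
    FROM customer_metrics_ld
) s
ON t.cust_key = s.cust_key
WHEN MATCHED THEN
    UPDATE SET metric1 = s.metric1
WHEN NOT MATCHED THEN
    INSERT (s.cust_key, s.metric1, s.vt);
```

The key point is the derived table: because the source now carries its own PERIOD value, the valid time comes from the data instead of from the date the merge happens to run.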
Hello there. I am facing the same problem. What do you mean by "convert the data into a temporal result set"? How can I do that?
Thank you very much.
Hi @snelluri, I have a similar task to do. I have data in a historical table, a Type 2 structure with PIT (point-in-time) start dates and PIT end dates. I need to load that complete data into my new table, which is a temporal table. I am looking to prepare a single script that reads the PIT start date and PIT end date for each row from the first table and assigns them as the validity start and end dates in the temporal table. Can you please give more details on "converted the data into a temporal result set"?
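If it helps, one way I understand the "temporal result set" idea for a one-time history load is a NONSEQUENCED VALIDTIME insert-select that builds the PERIOD column directly from the PIT dates. All names below are illustrative, not from the original post:

```sql
-- Map SCD2 PIT start/end dates onto the valid-time PERIOD column.
-- NONSEQUENCED VALIDTIME lets the statement supply the periods
-- explicitly instead of deriving them from the current date.
NONSEQUENCED VALIDTIME
INSERT INTO temporal_target (cust_key, attr1, vt)
SELECT cust_key,
       attr1,
       PERIOD(pit_start_dt, pit_end_dt)  -- becomes the row's validity
FROM history_scd2;
```

For the currently open rows (those carrying a high end date), the end bound can be replaced with UNTIL_CHANGED so the row stays open-ended in the temporal table.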
I am having the same problem, but I still cannot resolve it. If you can help, it will be highly appreciated.
I insert a row with start date = 1 Jan 2018 to the high date. But when I insert a second row with the same PK and a start date of 1 Feb 2018, I expected the first row to be closed at 31 Jan 2018; this does not happen.
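In case it helps, my understanding is that a plain INSERT never closes an existing row's validity; the overlap has to be handled by a sequenced statement carrying a period of applicability. A rough sketch, with illustrative names and an assumed 9999-12-31 high date:

```sql
-- Apply the new values only over the applicability period; the
-- slice of the old row's validity before 2018-02-01 is preserved
-- automatically, which effectively closes it at 31 Jan 2018.
SEQUENCED VALIDTIME PERIOD '(2018-02-01, 9999-12-31)'
UPDATE my_temporal_table
SET attr1 = 'new value'
WHERE pk_col = 123;
```

With this approach there is no separate close-then-insert step: the sequenced update splits the existing row, keeping the old values before the period and writing the new values within it.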