We're transitioning from Oracle to Teradata and need to replicate our existing ELT process; I'm after some advice/opinions about the available options.
We have a file -> stage process already in place, using TPT LOAD
We need to replicate our existing SQL*Plus codebase to move the data from multiple stage databases into our normalized central database, on the same Teradata server
A typical workflow consists of a parent-level SQL*Plus script calling multiple other SQL*Plus scripts, with multiple levels of nested scripts
Each 'leaf' script can contain both DDL and DML; typically these are complex INSERT...SELECT statements joining multiple large tables (billion+ rows)
The scripts are generic enough to support multiple clients with different workflows, and we make heavy use of substitution variables and dynamic SQL
I'm new to Teradata, but it seems we have 2 choices:
1) BTEQ
BTEQ doesn't support nested script calls, so we would need to write a shell wrapper that either constructs a single BTEQ script with multiple ".RUN FILE=xyz.sql" commands, or invokes a new BTEQ session for each SQL script
The shell wrapper would also need to make a local copy of each .sql script and perform the client-specific variable substitution
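To make the idea concrete, here is a minimal sketch of such a wrapper. The directory layout, the @CLIENT_DB@ placeholder style, and the client name are all assumptions for illustration, not part of our actual process:

```shell
#!/bin/sh
# Sketch: make client-specific copies of each .sql script, then build a
# single driver BTEQ script containing one .RUN FILE command per script.
set -e

CLIENT=${1:-acme}

# demo source script (in practice these already exist under sql/)
mkdir -p sql
printf 'INSERT INTO @CLIENT_DB@.t SELECT * FROM stage.t;\n' > sql/load_t.sql

WORK_DIR="work/$CLIENT"
mkdir -p "$WORK_DIR"
DRIVER="$WORK_DIR/driver.bteq"
: > "$DRIVER"

# substitute client-specific values into a local copy of each script,
# then append a .RUN FILE line for it to the driver
for f in sql/*.sql; do
  base=$(basename "$f")
  sed "s/@CLIENT_DB@/${CLIENT}_db/g" "$f" > "$WORK_DIR/$base"
  printf '.RUN FILE = %s\n' "$WORK_DIR/$base" >> "$DRIVER"
done

# a single BTEQ session would then execute the whole set, e.g.:
# bteq < "$DRIVER"
```

Nesting is flattened here: the parent/child script hierarchy would have to be resolved by the wrapper into one ordered list of .RUN FILE commands.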
2) TPT using the $DDL operator
We would probably define a job script with multiple steps, each step running one or more SQL statements via the DDL operator.
The advantage is that we could define the client-specific variables in the job variables file
But we would need to transform each SQL script into a string value (double quotes, escaping, leading whitespace on each line, etc.), which is not ideal for complex scripts
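For reference, a single step of such a job might look roughly like the sketch below; the job, step, and variable names (TargetDb, StageDb) are illustrative only, and each SQL statement has to be wrapped as a quoted string, which is the awkwardness described above:

```
DEFINE JOB move_stage_to_core
(
  STEP load_core_table
  (
    APPLY
      ('INSERT INTO ' || @TargetDb || '.core_table
        SELECT s.* FROM ' || @StageDb || '.stage_table s;')
    TO OPERATOR ($DDL);
  );
);
```

The client-specific values would then live in a job variables file (e.g. TargetDb = 'acme_db', StageDb = 'acme_stg') passed via tbuild's -v option.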
#1 seems easier and more robust (especially from an error-reporting perspective), but I'm wondering how others have handled this. Any alternatives?
Any feedback appreciated, thanks
If you already have SQL scripts prepared, the quickest way is to use BTEQ. You can simply embed the existing SQL scripts in BTEQ, and use BTEQ's conditional logic and error handling to control the flow and handle exceptions.
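As a minimal sketch of that flow control (logon details and script names are placeholders):

```
/* run each script in order; bail out with a non-zero return code on error */
.LOGON mytdp/etl_user,password
.RUN FILE = stage_step1.sql
.IF ERRORCODE <> 0 THEN .GOTO step_failed
.RUN FILE = stage_step2.sql
.IF ERRORCODE <> 0 THEN .GOTO step_failed
.LOGOFF
.QUIT 0
.LABEL step_failed
.LOGOFF
.QUIT 8
```

The .QUIT return code is visible to the calling shell, so a wrapper script can detect which run failed and act accordingly.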
In addition, you can use shell scripts to make logging and metadata extraction easier.