TPT increases parallelism over the traditional standalone utilities by using data streams instead of writing intermediate files to disk. Each data stream can be shared by multiple instances of each operator, which raises parallelism and speeds up the entire data load process.
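As a sketch of how this looks in a TPT job script (operator names and instance counts here are placeholders, not from the original post), the bracketed instance count in the APPLY step is what lets several copies of an operator share the same data streams:

```
APPLY ('INSERT INTO target_tbl (:col1, :col2);')
TO OPERATOR (load_op [2])          /* two parallel load-operator instances   */
SELECT * FROM OPERATOR (read_op [4]);  /* four parallel reader instances feeding
                                          the shared data streams, no disk files */
```

No intermediate file is landed between the reader and the loader; the rows flow through in-memory data streams.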
We have source files on the ETL server. We FastLoad the data into staging tables, then FastExport to files and send those files to an FTP location.
How does this parallelism work in that case? Do the standalone utilities do any writing to disk for the scenario above? Please correct me if I'm wrong.
I can see advantages like: 1) From an ETL perspective, when joining multiple sources the result is written to disk files, and those files are then used for further processing. Replacing the files with data streams, as you said, will improve performance. 2) When moving data from prod to test, there is no need to FastExport and then FastLoad or MultiLoad; with TPT we can avoid that intermediate step.
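The prod-to-test case in point 2 can be sketched as a single TPT job that pipes an Export operator directly into a Load operator, so no export file is ever landed. This is only an illustrative outline; the TdpId values, credentials, schema, and table names are all placeholders:

```
DEFINE JOB copy_prod_to_test
DESCRIPTION 'Copy a table from prod to test without landing data to disk'
(
  DEFINE SCHEMA order_schema
  (
    order_id INTEGER,
    amount   DECIMAL(18,2)
  );

  DEFINE OPERATOR export_op
  TYPE EXPORT
  SCHEMA order_schema
  ATTRIBUTES
  (
    VARCHAR TdpId        = 'prod_tdp',       /* placeholder system name  */
    VARCHAR UserName     = 'etl_user',
    VARCHAR UserPassword = '********',
    VARCHAR SelectStmt   = 'SELECT order_id, amount FROM prod_db.orders;'
  );

  DEFINE OPERATOR load_op
  TYPE LOAD
  SCHEMA *
  ATTRIBUTES
  (
    VARCHAR TdpId        = 'test_tdp',       /* placeholder system name  */
    VARCHAR UserName     = 'etl_user',
    VARCHAR UserPassword = '********',
    VARCHAR TargetTable  = 'test_db.orders_stg'
  );

  APPLY ('INSERT INTO test_db.orders_stg (:order_id, :amount);')
  TO OPERATOR (load_op)
  SELECT * FROM OPERATOR (export_op);
);
```

Compared with running FastExport to a file and then FastLoad from that file, the single job removes one step and the intermediate disk I/O, which is exactly the advantage described above.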