Data redistribution during large table Joins

The Teradata Database channel includes discussions around advanced Teradata features such as high-performance parallel database technology, the optimizer, mixed workload management solutions, and other related technologies.

Data redistribution during large table Joins

How does data redistribution take place while joining large tables?

Consider a join between two large tables where redistribution has to take place. Since the tables are large, they may not fit into spool space. Where will the data be stored during redistribution? How does Teradata handle this, and what is the background data flow?


2 REPLIES
rjg

Re: Data redistribution during large table Joins

Dinesh,

Check out the explain plan for your query for details.

Also, for a graphic depiction, try using Visual Explain.

 

Rglass

 

Teradata Employee

Re: Data redistribution during large table Joins

The explain plan will describe when/if redistribution happens in a plan. A join does not always require redistributing both tables, and the explain will tell how each table is accessed.

 

If a redistribution does occur, then the result of the redistribution must be placed in a spool file. If the plan says redistribution, it will also tell where the result is placed.

 

Redistribution is not done on the whole source table if at all possible. If there are any selection criteria that reduce the set of rows, the optimizer applies them as early as possible in the plan, reducing the number of rows that need to be redistributed. If only a subset of columns is referenced in the query, the optimizer removes the unreferenced columns as early as possible in the plan, reducing the number of bytes per row that need to be redistributed or spooled. Every effort is made to preserve compression attributes as data is carried through the plan, further reducing the bytes that have to be redistributed or spooled.
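To get a feel for why this matters, here is a back-of-the-envelope sketch (plain Python, with invented figures, not Teradata internals) of how early selection and projection shrink the bytes that must cross the BYNET:

```python
# Rough illustration of selection + projection pushdown.
# All numbers are hypothetical; the point is the ratio.

def redistributed_bytes(rows, selectivity, bytes_per_row):
    """Bytes shipped after filters keep `selectivity` of the rows
    and projection trims each row to `bytes_per_row`."""
    return int(rows * selectivity) * bytes_per_row

# SELECT * with no WHERE clause: every row, every column moves.
full = redistributed_bytes(10_000_000, 1.0, 200)

# A 10%-selective filter plus projection down to 40 bytes/row.
reduced = redistributed_bytes(10_000_000, 0.10, 40)

print(full, reduced)  # the reduced plan ships 50x fewer bytes
```

The same arithmetic explains why the spool for a redistributed subset can be far smaller than the source table itself.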

 

In a simple case the flow of a redistribution step will be (on all AMPs simultaneously):

- Read next row from source table

- Apply any filter conditions to determine if the row should participate - back to read next row if not

- Build row to redistribute from the source row, leaving out all columns that do not participate in the rest of the query

- Run the hash algorithm on the column(s) on which the data is to be redistributed

- Send the row to the BYNET messaging system, which will deliver the row to the proper AMP based on the row hash and the map of hash buckets - back to read next row

- On the receiving side (also on all AMPs simultaneously), receive the redistributed row and write it to the spool file specified in the plan

The above is the logical flow. There are many optimizations behind the curtain: in-memory redistribution buffers mean the BYNET does not have to send every row individually, the source rows are read in blocks or cylinders, and the received rows are built into blocks in memory before being written to disk, and so on.
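As a toy model of the per-row flow above (plain Python, not Teradata internals; the hash function and data are stand-ins), each "AMP" scans its local rows, filters, projects, hashes the join column, and delivers the row to the receiving AMP's spool:

```python
# Minimal sketch of hash redistribution across AMPs.
NUM_AMPS = 2

def hash_amp(value, num_amps=NUM_AMPS):
    # Stand-in for Teradata's row hash + hash-bucket map.
    return hash(value) % num_amps

def redistribute(local_rows_per_amp, join_col, keep_cols, predicate):
    spools = [[] for _ in range(NUM_AMPS)]   # receiving spool per AMP
    for local_rows in local_rows_per_amp:    # in reality, all AMPs run in parallel
        for row in local_rows:
            if not predicate(row):           # apply filter conditions first
                continue
            slim = {c: row[c] for c in keep_cols}  # drop unreferenced columns
            target = hash_amp(row[join_col])       # hash the join column(s)
            spools[target].append(slim)            # "BYNET" delivers to target AMP
    return spools

# Toy data: two source AMPs, each holding a few rows of a table.
amps = [
    [{"id": 1, "name": "a", "unused": "x"}, {"id": 2, "name": "b", "unused": "y"}],
    [{"id": 3, "name": "c", "unused": "z"}, {"id": 4, "name": "d", "unused": "w"}],
]
spools = redistribute(amps, join_col="id",
                      keep_cols=("id", "name"),
                      predicate=lambda r: r["id"] != 4)
# All rows with the same join-column hash land in the same receiving
# spool, the filtered row never travels, and the unused column is gone.
```

The key property for the join is that matching values from both tables hash to the same AMP, so the join can then run AMP-locally against the spools.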

 

If the user says SELECT *, provides no constraints, and has no compression on the source tables, then the entire table will have to be redistributed and spooled, and space for those spool files will be required.