Records vs Speed!

Hi All,

I am curious why, when joining a large number of records, my application runs faster than it does when I have a small number of records.

More info:
- Some of the joins are quite complex, matching on name and address.

Table A is input data that changes (100 - 50,000 records).
Table B is fixed data (27 million records).

Eg.

TABLE A    TABLE B        APPROXIMATE TIME
112        27,000,000     38 minutes
40,000     27,000,000     27 minutes

Just trying to understand how Teradata works, and whether I can improve my application for smaller amounts of input data.

Thanks!

Luke.
2 REPLIES

Re: Records vs Speed!

Luke,

"Run faster" is a subjective measure. The number you really need to be looking at is CPU and I/O cycles as other workload running on the system at the same time will impact your query performance. [As well as a measure referred to as parallel efficiency...]

Similarly, join paths, index selection, filter type, the absence or presence of statistics on the objects' underlying columns (and the freshness of those statistics), and table skew all come into play in terms of the optimizer being able to generate the optimal plan for your request.
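For example, on Teradata you can check which statistics exist on the large fixed table and refresh them on the join columns. This is only a sketch; the table and column names below are placeholders, not Luke's actual objects:

```sql
-- Show which statistics currently exist on the fixed table
HELP STATISTICS table_b;

-- Collect (or refresh) statistics on the columns used in the join,
-- so the optimizer can estimate row counts accurately
COLLECT STATISTICS ON table_b COLUMN (last_name);
COLLECT STATISTICS ON table_b COLUMN (address_line1);
```

Stale or missing statistics are a common reason the optimizer misestimates the size of the small input table and picks a poor join plan.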

Basically, you need to do more information gathering to really understand what is going on with your system.

hth,

-pt


Re: Records vs Speed!

You can use EXPLAIN to see the plan. :-)
Sometimes, when the input is a small amount of data, the PE decides that a product join is cheaper than a merge join; perhaps that's the case here.
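Prefixing the query with EXPLAIN returns the optimizer's plan text instead of running it, so you can compare the plans for the 112-row and 40,000-row inputs. A minimal sketch, with placeholder table and column names:

```sql
EXPLAIN
SELECT a.*, b.account_id
FROM table_a a
JOIN table_b b
  ON  a.last_name     = b.last_name
  AND a.address_line1 = b.address_line1;
-- In the plan text, look for "product join" vs "merge join",
-- and for redistribution or duplicate-to-all-AMPs steps on table_b.
```

If the small-input plan duplicates or redistributes the 27-million-row table while the large-input plan does not, that would explain the counterintuitive timings.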