When an EXPLAIN is run on Teradata, does the plan that gets generated rely solely on the statistics and the data at rest, or is the current system load also taken into account?
If the system load is ignored, isn't the generated plan always for an ideal system (i.e., one where I'm the only user)?
The current system load isn't taken into account. The optimizer analyzes the different paths that could resolve the query and chooses the one with the lowest cost.
That's what I was thinking. It would be interesting if the time costs listed with each step in the EXPLAIN reflected the additional time under a heavier system load.
Because workload is so widely variable, an EXPLAIN whose estimates varied with the workload at the time it was run would produce numbers that aren't comparable at all from one plan to the next, or from one query to the next. We feel it is better to base the estimates on a constant baseline so that they are comparable regardless of when the EXPLAIN is retrieved.
If your original question was asking whether the plan itself (not just the time estimates) would/should change based on workload, then the answer is no. Teradata assumes that queries will always be executing in a concurrent, mixed-workload environment and plans accordingly. For instance, there is no plan that uses all of memory for a single query on the assumption that it has the system to itself, or that uses more temp space because it has the system to itself. Teradata always optimizes for the minimum resource cost of the plan, and all the execution operators and their use of resources are designed to work in a high-concurrency environment.
Of course, the "time" presented in the EXPLAIN is not intended to be a real elapsed time, even on an empty system. It is a relative cost metric rolled up from the underlying resource requirements; it is meant to be compared between plans and understood in relative terms, not used to predict actual response time.
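To make that concrete, here is a minimal sketch of what the cost figure discussed above looks like in practice. The table name and the numbers are hypothetical, used only for illustration:

```sql
-- Hypothetical example; table and estimates are illustrative only.
EXPLAIN
SELECT cust_id, SUM(sales_amt)
FROM daily_sales
GROUP BY cust_id;

-- A step in the EXPLAIN output might end with a line such as:
--   "... The estimated time for this step is 0.16 seconds."
-- Per the discussion above, 0.16 is a relative cost figure derived from a
-- constant baseline, useful for comparing plans against each other, not a
-- prediction of wall-clock time on a loaded system.
```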