Multi Level Partitioning v/s secondary indexes

Enthusiast


Which one is better performance-wise?

What are the criteria for choosing between partitioning and indexes?

Regards,
Subhash
3 REPLIES
Enthusiast

Re: Multi Level Partitioning v/s secondary indexes

Secondary indexes require more maintenance because they are implemented as subtables; partitions need no such extra subtables.
Also, when the partitioning column is not part of the PI, the PI cannot be defined as unique; in that case you can define a USI to enforce uniqueness.
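A rough sketch of that case, with all table and column names made up for illustration: the PI is on order_id, the table is partitioned on order_date, so the PI cannot be declared unique and a USI on order_id enforces uniqueness instead.

CREATE TABLE orders
(
    order_id    INTEGER NOT NULL,
    order_date  DATE NOT NULL,
    amount      DECIMAL(10,2)
)
PRIMARY INDEX (order_id)       -- cannot be UNIQUE, because order_date is not in the PI
PARTITION BY RANGE_N (
    order_date BETWEEN DATE '2024-01-01' AND DATE '2024-12-31'
    EACH INTERVAL '1' MONTH
);

-- The USI enforces uniqueness of order_id, at the cost of an index subtable.
CREATE UNIQUE INDEX (order_id) ON orders;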
Enthusiast

Re: Multi Level Partitioning v/s secondary indexes

Whether to define a Partitioned Primary Index (PPI) on a table depends on how its rows will be accessed the majority of the time. A PPI improves the performance of certain types of queries, mainly range queries, and works through partition elimination.
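For example (a hypothetical table, not from the original post), a predicate on the partitioning column lets the optimizer read only the qualifying partitions:

CREATE TABLE sales_hist
(
    store_id   INTEGER NOT NULL,
    sale_date  DATE NOT NULL,
    amount     DECIMAL(10,2)
)
PRIMARY INDEX (store_id)
PARTITION BY RANGE_N (
    sale_date BETWEEN DATE '2024-01-01' AND DATE '2024-12-31'
    EACH INTERVAL '1' MONTH
);

-- Only the March and April partitions are scanned (partition elimination);
-- the other ten monthly partitions are skipped.
SELECT store_id, SUM(amount)
FROM sales_hist
WHERE sale_date BETWEEN DATE '2024-03-01' AND DATE '2024-04-30'
GROUP BY store_id;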

The reason to define a PPI is to increase query performance by avoiding full-table scans without the overhead of secondary indexes.

The main disadvantage of a PPI shows up when the partitioning column is not part of the PI: if a query specifies the PI column values but not the partitioning column values, it has to probe every partition instead of going directly to the first row with that PI value.
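Using the same hypothetical sales_hist table as above, this is the access pattern being described: the PI value is known, but with no predicate on the partitioning column every partition on the AMP has to be probed for that row hash.

-- PI value specified, but no condition on sale_date:
-- the row hash finds the right AMP, yet each of the twelve
-- monthly partitions must be checked for store_id = 42.
SELECT *
FROM sales_hist
WHERE store_id = 42;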
Enthusiast

Re: Multi Level Partitioning v/s secondary indexes

Hello,
I have been looking for answers to a basic question on table partitioning in Teradata. Can someone please shed some light? Here it is -

If we range-partition a table on numeric values and different jobs then load data corresponding to different partition ranges in parallel, do we run into resource contention, or does each parallel job hit its own partition and acquire a lock only on that partition? In short: while loading data into a partitioned table (in this case range-partitioned), is the lock acquired at the partition level, so that multiple programs can run in parallel each locking individual partitions, or is the lock acquired on the whole table, so that the parallel jobs end up contending with each other?

Example: an error table where each program loads the specific error information it encounters, keyed by error code. If the table is range-partitioned on error codes in bands of 1000, will individual programs running in parallel each hit their own partition and lock only that partition, or will each acquire a full table lock and release it for subsequent jobs once its error load is done?
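For concreteness, here is a sketch of the kind of table I mean (all names and ranges are just illustrative):

CREATE TABLE error_log
(
    error_code  INTEGER NOT NULL,
    job_name    VARCHAR(30),
    error_ts    TIMESTAMP(0),
    error_text  VARCHAR(1000)
)
PRIMARY INDEX (error_code, error_ts)
PARTITION BY RANGE_N (
    error_code BETWEEN 1 AND 10000 EACH 1000   -- one partition per band of 1000 error codes
);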

Please advise.