We have a table that is partitioned on a date_key field.
By joining this table to the time dimension with an inner join in SQL, we get dynamic partition elimination in the plan.
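For context, the join shape is roughly like the sketch below (the table and column names `fact_sales` and `time_dim` are hypothetical placeholders, not our real schema):

```sql
-- Join to the time dimension; the optimizer uses the join
-- on the partitioning column for dynamic partition elimination.
SELECT f.*
FROM fact_sales f
INNER JOIN time_dim t
    ON f.date_key = t.date_key
WHERE t.week_key = 202310;   -- filter on the dimension, not the fact table
```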
We have statistics on the system PARTITION column, on the partitioning column (date_key), and on the week_key column of the dimension.
The issue is that the estimated rows are more than 60% off (the table has 100 M rows and roughly 30 columns).
We even use "USING MAXINTERVALS 500" in order to be as accurate as possible.
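For reference, the statistics are collected along these lines (a sketch with a placeholder table name; the exact statement depends on your DDL):

```sql
-- Collect detailed histograms on the partitioning column
-- and the system PARTITION column (TD 14+ syntax).
COLLECT STATISTICS
    USING MAXINTERVALS 500
    COLUMN (date_key),
    COLUMN (PARTITION)
ON fact_sales;
```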
If we use a WHERE clause on the fact table directly instead, the estimated rows are correct.
How can we fix this issue? It is a serious problem for us, since the whole query repeats this step more than 20 times, and it leads to a totally wrong plan (highly skewed...).
The TD version is 15.
Thank you in advance.
You get dynamic partition elimination when the optimizer doesn't know the actual values.
Because of this, exact row counts can't be calculated from statistics, and increasing the number of intervals doesn't help.
Can you show the PPI definition and the join(s)?
Maybe it can be replaced by Scalar Subqueries.
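One possible rewrite along those lines (a sketch with placeholder names `fact_sales`/`time_dim`; whether it helps depends on the actual query): scalar subqueries resolve to single values, which can give the optimizer usable bounds on the partitioning column instead of an opaque join.

```sql
-- Replace the dimension join with scalar subqueries that
-- return the date_key range for the selected week.
SELECT f.*
FROM fact_sales f
WHERE f.date_key BETWEEN
      (SELECT MIN(date_key) FROM time_dim WHERE week_key = 202310)
  AND (SELECT MAX(date_key) FROM time_dim WHERE week_key = 202310);
```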
It is only one level of partitioning:
PARTITION BY ( RANGE_N(Week_Key BETWEEN -1 AND 3500 EACH 1 , NO RANGE, UNKNOWN))
(We also tried the same with HASHBUCKET(....).)
This SQL is generated dynamically by MicroStrategy (MSTR), so we cannot change it to use scalar subqueries.
If I decrease the number of partitions and apply soft RI, or add an index, should that help?
On the PI, is it better to keep the partitioning column?
Sorry, copy-paste error: the join is on date_key and the partitioning is on date_key (same syntax as I wrote before).