A sliding window merge join is a costly operation compared to a rowkey-based merge join. If your Explain shows a rowkey-based merge join, I would not try changing anything there.
What about the situation where the left table (NPPI) fits entirely into memory and the right table is partitioned (PPI)? In this case each rowhash of the left table is compared against the first partition, the second partition, etc., but the left table has to be read into FSG cache only once.
Although the left table has to be de-partitioned, could this be cheaper than a rowkey-based merge join where the left table is a PPI table with only a few rows (or even one row) in each partition, only a few rows (or one row) in each right table partition, and each left table partition is joined against its matching right table partition (and there are many of them...)?
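To make the scenario concrete, here is a minimal sketch of the two setups being compared; all table and column names are my own assumptions, not taken from the thread's actual test setup:

```sql
-- Left table: NPPI (no partitioning), hypothetical names.
CREATE TABLE left_nppi (
    store_id  INTEGER,
    SalesDate DATE,
    amount    DECIMAL(10,2)
) PRIMARY INDEX (store_id);

-- Right table: PPI with daily partitions.
CREATE TABLE right_ppi (
    store_id  INTEGER,
    SalesDate DATE,
    amount    DECIMAL(10,2)
) PRIMARY INDEX (store_id)
PARTITION BY RANGE_N(SalesDate BETWEEN DATE '2010-01-01'
                     AND DATE '2010-12-31' EACH INTERVAL '1' DAY);

-- Joining on the PI only: each rowhash of left_nppi must be matched
-- against every partition of right_ppi (sliding window), but left_nppi
-- is read into FSG cache only once.
EXPLAIN
SELECT *
FROM left_nppi l
JOIN right_ppi r
  ON l.store_id = r.store_id;
```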
I still wonder why I get the traditional merge join in such a situation (see also the test setup I was using earlier in this thread)...
Nothing could be cheaper than a rowkey-based merge join on PPI tables.
The Optimizer decides the join type based on the data demographics or statistics collected on the tables; it never unconditionally goes for one specific type of join.
In your cases above, after collecting the suggested stats, the Optimizer always decided to go with a merge join, probably because your table partitions are very fine (one row per partition) and the Optimizer became well aware of that.
One of the considerations for joins on PPI tables is that coarser partitions are comparatively better than finer partitions. Can you please try with more rows in your partitions and check whether the Explains are still consistent, and also whether the Optimizer starts using a rowkey-based join at some stage?
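One way to test this suggestion is to compare a fine and a coarse partitioning of the same data; the sketch below uses hypothetical table names and a daily vs monthly RANGE_N scheme as an example:

```sql
-- Fine partitioning: daily partitions, potentially one row per partition.
-- Table and column names are assumptions for illustration.
CREATE TABLE sales_daily (
    store_id  INTEGER,
    SalesDate DATE,
    amount    DECIMAL(10,2)
) PRIMARY INDEX (store_id)
PARTITION BY RANGE_N(SalesDate BETWEEN DATE '2010-01-01'
                     AND DATE '2010-12-31' EACH INTERVAL '1' DAY);

-- Coarser partitioning: monthly partitions, many rows per partition.
CREATE TABLE sales_monthly (
    store_id  INTEGER,
    SalesDate DATE,
    amount    DECIMAL(10,2)
) PRIMARY INDEX (store_id)
PARTITION BY RANGE_N(SalesDate BETWEEN DATE '2010-01-01'
                     AND DATE '2010-12-31' EACH INTERVAL '1' MONTH);
```

Loading the same rows into both tables and comparing the Explains of the same join should show whether the partition granularity alone flips the plan.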
I tried out your ideas and finally, after increasing the table size from 70,000 rows to millions of rows, the Optimizer switched to a rowkey-based merge join in every single situation:
- Having a WHERE condition covering all partitions
- Having a JOIN only, without a WHERE condition on any partitions.
Strangely, for the "small table test" (only 70,000 rows in each table), the Optimizer chooses a more expensive join, even more frequently when it has statistics on PARTITION, on the partition column (SalesDate in my case), and on combinations of (PI, PARTITION, partition column). Somehow it assumes this is cheaper (maybe cost factors other than I/Os and CPU seconds are considered here?).
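For reference, the statistics combinations described above would look roughly like this; table name and PI column are assumptions, and the exact multi-column syntax may vary by Teradata release:

```sql
-- Statistics on the system-derived PARTITION column.
COLLECT STATISTICS ON sales_daily COLUMN PARTITION;

-- Statistics on the partitioning column itself.
COLLECT STATISTICS ON sales_daily COLUMN (SalesDate);

-- Combined statistics on (PI, PARTITION, partition column).
COLLECT STATISTICS ON sales_daily COLUMN (store_id, PARTITION, SalesDate);
```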
Furthermore, there seems to be a rule implemented for relatively small tables (whatever "small" means here), which looks something like: "If all partitions are selected, go for a traditional merge join; if not all partitions are selected, do the rowkey-based merge join." I could observe this behaviour in all kinds of setups: daily partitions, monthly partitions, etc. If you have small tables with n partitions and select all n partitions, the traditional merge join is done. If you select (n-1) partitions, the Optimizer uses the rowkey-based merge join.
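The observed rule can be illustrated with two queries that differ only in their date range; the tables, columns, and date window are hypothetical stand-ins for the test setup, and the plans noted in the comments are the behaviour reported above, not guaranteed outcomes:

```sql
-- Selecting all n partitions (full year, daily partitions):
-- a traditional merge join was observed on small tables.
EXPLAIN
SELECT a.store_id, a.amount, b.amount
FROM sales_a a
JOIN sales_b b
  ON  a.store_id  = b.store_id
  AND a.SalesDate = b.SalesDate
WHERE a.SalesDate BETWEEN DATE '2010-01-01' AND DATE '2010-12-31';

-- Excluding a single partition ((n-1) of n partitions):
-- the rowkey-based merge join was observed instead.
EXPLAIN
SELECT a.store_id, a.amount, b.amount
FROM sales_a a
JOIN sales_b b
  ON  a.store_id  = b.store_id
  AND a.SalesDate = b.SalesDate
WHERE a.SalesDate BETWEEN DATE '2010-01-02' AND DATE '2010-12-31';
```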
Whatever the reason for this behaviour is, it can be used to our advantage in SQL tuning.
Thanks for your posting. So too-fine partitioning might be the reason the Optimizer chooses a sliding window merge join. OK, this is at least an explanation for why the Optimizer does this.
Nevertheless: I don't think it can really be faster to do so.