As there's a good join condition, the product join was chosen by the optimizer because it's the best way to get the result.
Did you check in DBQL if the estimated number of rows is actually in that range?
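One way to compare the optimizer's estimates against the actual row counts per step is the step-level DBQL view. This is a sketch only: it assumes step logging is enabled (query logging WITH STEPINFO) and that your release's DBC.QryLogStepsV exposes EstRowCount/RowCount; the QueryID placeholder must come from DBC.QryLogV for the query in question.

```sql
-- Compare optimizer estimates to actual rows produced, step by step.
-- Assumes BEGIN QUERY LOGGING ... WITH STEPINFO is in effect.
SELECT s.QueryID,
       s.StepLev1Num,
       s.StepName,
       s.EstRowCount,   -- optimizer's estimated rows for the step
       s.RowCount       -- actual rows the step produced
FROM   DBC.QryLogStepsV s
WHERE  s.QueryID = <your QueryID>   -- look it up in DBC.QryLogV
ORDER  BY s.StepLev1Num;
```

A large gap between EstRowCount and RowCount on a step points at the table/columns whose statistics the optimizer is guessing about.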
What are the steps producing spool 26 and 27?
The actual rows are not in that range. And the stats on columns CLIENT_NBR and POP_ID are up to date.
Here are the steps for spool 26 and 27.
2) We do a group-AMP RETRIEVE step from Spool 2 (Last Use) by way
of an all-rows scan with a condition of (
"(population_client_user.USER_ID = 'XWL5KAM') AND
((population_client_user.CLIENT_NBR = 5610) AND
(population_client_user.POP_ID = 0 ))") into Spool 26
(all_amps) (compressed columns allowed), which is duplicated
on all AMPs. The size of Spool 26 is estimated with no
confidence to be 2,841,072 rows (102,278,592 bytes). The
estimated time for this step is 0.08 seconds.
16) We do an all-AMPs JOIN step from Spool 24 (Last Use) by way of a
RowHash match scan, which is joined to Spool 25 (Last Use) by way
of a RowHash match scan. Spool 24 and Spool 25 are joined using a
merge join, with a join condition of ("SK_HICL_CDE = SK_HICL_cde").
The result goes into Spool 27 (all_amps) (compressed columns
allowed), which is built locally on the AMPs. The size of Spool
27 is estimated with no confidence to be 110,976,868 rows (
46,832,238,296 bytes). The estimated time for this step is 7.10
seconds.
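Both spools above are estimated "with no confidence", which usually means the optimizer has no usable statistics for the condition or join column involved. A quick, hedged way to see which statistics the optimizer itself would like (Teradata-specific diagnostic; the recommendations appear at the end of the EXPLAIN output) is:

```sql
-- Ask the optimizer to list the statistics it wishes it had.
DIAGNOSTIC HELPSTATS ON FOR SESSION;

-- Re-explain the problem query; recommended COLLECT STATISTICS
-- statements are appended to the plan text.
EXPLAIN <your query>;
```

Remember to turn the diagnostic off again (DIAGNOSTIC HELPSTATS NOT ON FOR SESSION) when done, as it adds noise to every EXPLAIN in the session.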
The stats are up to date for these columns; I double-checked. What could be the issue?
Is there any way to find stale stats on a table?
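One rough staleness check is to query the statistics dictionary view. This is a sketch under assumptions: it requires Teradata 14.0+ (where DBC.StatsV exists), column names can vary by release, and the 30-day cutoff and 'your_db' filter are placeholders, not recommendations.

```sql
-- List statistics that haven't been recollected recently.
-- DBC.StatsV records the row count seen at collection time, so
-- comparing RowCount here to a live COUNT(*) also reveals drift.
SELECT DatabaseName,
       TableName,
       ColumnName,
       RowCount,               -- rows at last collection
       LastCollectTimestamp
FROM   DBC.StatsV
WHERE  DatabaseName = 'your_db'
  AND  LastCollectTimestamp < CURRENT_TIMESTAMP - INTERVAL '30' DAY
ORDER  BY LastCollectTimestamp;
```

Note that an old LastCollectTimestamp is only a hint: stats on a static table stay valid indefinitely, while stats on a fast-growing table can be stale within days.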