Poor merge performance with PDCR table DBQLObjTbl_hst after upgrade to Teradata 15.10

We upgraded our production system to Teradata 15.10 (from 14.0) last weekend. We installed the corresponding PDCR version after the upgrade, and while running the 3rd phase of the setup script (copying data from the old tables to the new tables) I noticed that the copy of the DBQLObjTbl_Hst_v1400 table into the new DBQLObjTbl_Hst table was taking hours. We ultimately aborted the script, renamed the v1400 table to preserve the data, created an empty DBQLObjTbl_Hst_v1400 table, and restarted the script to get past that copy step.

Now, a week later, the daily Viewpoint job that copies data from DBC to DBQLObjTbl_Hst has been running for over 9 hours with no sign of finishing. The SQL being executed is an insert into PDCRDATA.DBQLObjTbl_Hst from PDCRSTG.DBQLObjTbl_Tmp, and the explain shows it is in a merge step from spool into DBQLObjTbl_Hst. The estimated time for that step is about 22 minutes for an estimated 107 million rows, but it has now been running for almost 10 hours.
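For reference, the workaround looked roughly like this (a sketch rather than the exact statements we ran; the _sav suffix is just illustrative):

-- 1. Preserve the existing history data under a different name
RENAME TABLE PDCRDATA.DBQLObjTbl_Hst_v1400 TO PDCRDATA.DBQLObjTbl_Hst_v1400_sav;

-- 2. Recreate an empty shell so the setup script's copy step has a source
--    (CREATE TABLE ... AS ... WITH NO DATA copies the definition only)
CREATE TABLE PDCRDATA.DBQLObjTbl_Hst_v1400 AS
    PDCRDATA.DBQLObjTbl_Hst_v1400_sav WITH NO DATA;

-- 3. Restart the setup script; the copy step now moves zero rows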

 

Has anyone seen this performance issue with the PDCR jobs for this table?  Suggestions on how to improve the performance?

 

Thanks,  

 

Chuck

7 REPLIES
Junior Contributor

Re: Poor merge performance with PDCR table DBQLObjTbl_hst after upgrade to Teradata 15.10

Can you show the DDL of your PDCRDATA.DBQLObjTbl_Hst?

 

Re: Poor merge performance with PDCR table DBQLObjTbl_hst after upgrade to Teradata 15.10

Here's the DDL:

 

CREATE SET TABLE PDCRDATA.DBQLObjTbl_Hst ,NO FALLBACK ,
NO BEFORE JOURNAL,
NO AFTER JOURNAL,
CHECKSUM = DEFAULT,
DEFAULT MERGEBLOCKRATIO
(
LogDate DATE FORMAT 'yyyy/mm/dd' NOT NULL,
ProcID DECIMAL(5,0) FORMAT '-(5)9' NOT NULL,
CollectTimeStamp TIMESTAMP(6) FORMAT 'YYYY-MM-DDBHH:MI:SS' NOT NULL,
QueryID DECIMAL(18,0) FORMAT '--Z(17)9' NOT NULL,
ZoneID BYTE(4),
ObjectDatabaseName VARCHAR(128) CHARACTER SET UNICODE NOT CASESPECIFIC COMPRESS USING TD_SYSFNLIB.TRANSUNICODETOUTF8 DECOMPRESS USING TD_SYSFNLIB.TRANSUTF8TOUNICODE ,
ObjectTableName VARCHAR(128) CHARACTER SET UNICODE NOT CASESPECIFIC COMPRESS USING TD_SYSFNLIB.TRANSUNICODETOUTF8 DECOMPRESS USING TD_SYSFNLIB.TRANSUTF8TOUNICODE ,
ObjectColumnName VARCHAR(128) CHARACTER SET UNICODE NOT CASESPECIFIC COMPRESS USING TD_SYSFNLIB.TRANSUNICODETOUTF8 DECOMPRESS USING TD_SYSFNLIB.TRANSUTF8TOUNICODE ,
ObjectID BYTE(4) NOT NULL,
ObjectNum INTEGER FORMAT '--,---,---,--9' COMPRESS (0 ,1 ),
ObjectType CHAR(3) CHARACTER SET LATIN NOT CASESPECIFIC NOT NULL COMPRESS ('Col','DB ','Idx','JIx','Mac','Tab','UDF','Viw'),
FreqofUse INTEGER FORMAT '--,---,---,--9' COMPRESS (1 ,2 ,3 ,4 ,5 ),
TypeofUse BYTEINT FORMAT '--9',
ExtraField1 BYTE(4),
ExtraField2 CHAR(5) CHARACTER SET LATIN NOT CASESPECIFIC,
ExtraField3 VARCHAR(128) CHARACTER SET UNICODE NOT CASESPECIFIC,
ExtraField4 VARCHAR(128) CHARACTER SET UNICODE NOT CASESPECIFIC,
ExtraField5 VARCHAR(128) CHARACTER SET UNICODE NOT CASESPECIFIC)
PRIMARY INDEX NUPI_DBQLObjTbl_Hst ( LogDate ,QueryID )
PARTITION BY RANGE_N(LogDate BETWEEN DATE '2013-01-01' AND DATE '2018-12-31' EACH INTERVAL '1' DAY );

Junior Contributor

Re: Poor merge performance with PDCR table DBQLObjTbl_hst after upgrade to Teradata 15.10

Looks like the standard DDL (I never understood why it's SET instead of MULTISET).

 

Did you check whether this step used a lot of CPU?

Of course SET causes some additional overhead, but there shouldn't be a huge number of rows per QueryID.
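Something like this would show whether any PI value is badly overloaded (a sketch to run against your history table):

-- Largest numbers of rows sharing one (LogDate, QueryID) PI value;
-- big counts would make the SET-table duplicate row check expensive.
SELECT TOP 10
    LogDate, QueryID, COUNT(*) AS RowsPerPI
FROM PDCRDATA.DBQLObjTbl_Hst
GROUP BY LogDate, QueryID
ORDER BY RowsPerPI DESC;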

 

Is the target table block-level compressed?

Re: Poor merge performance with PDCR table DBQLObjTbl_hst after upgrade to Teradata 15.10

Yes, the CPU was extremely high: over 1 million CPU seconds. The job finally completed after 12.5 hours. Looking at the history for this job (we only have 1 week, since that's when the job started running under Viewpoint), the 107 million rows it was inserting was much higher than normal. The next-highest run inserted 27 million rows and ran for 2.5 hours; on the remaining days fewer than 15 million rows were inserted and each run finished in under an hour. I think this run stretched out so much because it overlapped prime user time and there was a lot of contention for CPU. We aren't using block-level compression at this time.
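We pulled those daily numbers with a query along these lines (a sketch; it assumes the job is captured in DBQL, and the QueryText filter is only illustrative):

-- Daily CPU and row counts for the PDCR insert, from the DBQL view.
-- The LIKE filter is an assumption; restrict by UserName instead if
-- you know the account the Viewpoint/PDCR job runs under.
SELECT CAST(StartTime AS DATE) AS RunDate,
    SUM(AMPCPUTime)    AS TotalCPUSecs,
    SUM(NumResultRows) AS RowsAffected
FROM dbc.QryLogV
WHERE QueryText LIKE '%DBQLObjTbl_Hst%'
GROUP BY 1
ORDER BY 1;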

 

Thank you for your time!

Junior Supporter

Re: Poor merge performance with PDCR table DBQLObjTbl_hst after upgrade to Teradata 15.10

The high number of inserts warrants some investigation. It is possibly one or a few sessions doing a lot of single-row transactions. That would result in a lot of "duplicate row checking".

A simple way to avoid the duplicate row check is to change the _Hst tables to MULTISET.
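As far as I know, ALTER TABLE can't switch a table from SET to MULTISET, so it would take a recreation along these lines (a sketch; the names are illustrative, and stats and grants would need to be re-applied):

-- Recreate as MULTISET, copy the data, then swap the names.
CREATE MULTISET TABLE PDCRDATA.DBQLObjTbl_Hst_new AS
    PDCRDATA.DBQLObjTbl_Hst WITH NO DATA;

INSERT INTO PDCRDATA.DBQLObjTbl_Hst_new
SELECT * FROM PDCRDATA.DBQLObjTbl_Hst;

RENAME TABLE PDCRDATA.DBQLObjTbl_Hst TO PDCRDATA.DBQLObjTbl_Hst_old;
RENAME TABLE PDCRDATA.DBQLObjTbl_Hst_new TO PDCRDATA.DBQLObjTbl_Hst;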

 

Teradata Frank, Certified Master
Junior Contributor

Re: Poor merge performance with PDCR table DBQLObjTbl_hst after upgrade to Teradata 15.10

Hi Frank,

Single-row inserts will have different QueryIDs, so this should not result in duplicate row checks for this table (it's really bad in dbc.acclogtbl).

Fan

Re: Poor merge performance with PDCR table DBQLObjTbl_hst after upgrade to Teradata 15.10

Did you notice whether the PIs changed? I thought at one point these tables had a PI of (LogDate, QueryID, ProcID), but that it changed to (LogDate, ProcID) in a later DB release.
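You can verify the current PI from the data dictionary with something like this (a sketch):

-- Current primary index columns ('P' = primary, 'Q' = partitioned primary)
SELECT ColumnPosition, ColumnName
FROM dbc.IndicesV
WHERE DatabaseName = 'PDCRDATA'
  AND TableName = 'DBQLObjTbl_Hst'
  AND IndexType IN ('P', 'Q')
ORDER BY ColumnPosition;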