Recently our Teradata server was physically moved from one location (west coast) to another (east coast). We use Oracle Database Gateway to connect Oracle to Teradata. Since the migration, moving data from Teradata to Oracle is taking 10 times longer to complete.
Just wondering what could be causing the slowdown in the data transfer. Is there a way to resolve it?
Any help is much appreciated.
Thanks in advance for the help!
I would first try to figure out where the additional time is being spent, e.g. using DBQL. It may be related to simply being on a different type of platform and/or different release of software rather than geographic location, for example.
Some Oracle gateway operations do a lot of metadata queries against DBC "X views". If you don't have recommended statistics collected on the core DBC dictionary tables, those queries can consume significant CPU and elapsed time.
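For reference, those dictionary statistics are collected with ordinary COLLECT STATISTICS statements against the DBC tables. A sketch only; the table/column pairs below are illustrative, and the full recommended list depends on your Teradata release (check the documentation or the DIP scripts):

```sql
-- Illustrative only: statistics on a few core dictionary tables
-- that the DBC "X views" join against.
COLLECT STATISTICS ON DBC.TVM      COLUMN (DatabaseId, TVMName);
COLLECT STATISTICS ON DBC.TVFields COLUMN (TableId, FieldName);
COLLECT STATISTICS ON DBC.DBase    COLUMN (DatabaseName);
```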
If the problem is transferring the results after the main SELECT request has completed execution, it's likely to be something network related.
And so on...
Thank you for the suggestions.
One follow-up question on the suggestion that "...it's likely to be something network related." Does the network here refer to the Teradata side or the Oracle side?
For a network issue, it could be anything on the path between the two systems. I wouldn't expect it to be on the network interface on the Oracle server or first "hop" on that end, since that hasn't changed.
Our Teradata server was recently upgraded to version 16, but the Teradata client on the Oracle side is still version 14. Would this have an impact on performance?
-- Below is what we run in Oracle (a simple CREATE TABLE AS SELECT statement)
CREATE TABLE table_name AS
  SELECT * FROM table_name@dblink;  -- DB link created in Oracle using the gateway connection
# HS init parameters
# Environment variables required for the non-Oracle system
TTU 14.10 is not officially certified/supported with TD 16.x database (and 14.10 software maintenance ended some time ago). That said, for such a simple query I think it's unlikely that using the old driver is causing your performance problem.
Check DBQL on the Teradata side. Is the time being spent running the SELECT itself (between FirstStepTime and FirstRespTime), waiting in workload queue on TD (between StartTime and FirstStepTime), establishing the connection and obtaining metadata (prior to the SELECT StartTime), or after the SELECT has finished executing and starts to return data (FirstRespTime)? If you use DBQL collection MODE=3 then you can also see when the data transfer finished (LastRespTime).
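If someone with access can query DBQL, the breakdown above can be pulled with something along these lines. A sketch only: the view and timestamp columns are the standard DBC.QryLogV ones, but the interval precision and the WHERE clause (the user name is a placeholder) will need adjusting for your site:

```sql
-- Break the elapsed time into the phases described above.
-- LastRespTime is only populated with DBQL collection MODE=3.
SELECT StartTime,
       FirstStepTime,
       FirstRespTime,
       LastRespTime,
       (FirstStepTime - StartTime)     HOUR(2) TO SECOND(6) AS QueueTime,
       (FirstRespTime - FirstStepTime) HOUR(2) TO SECOND(6) AS ExecTime,
       (LastRespTime  - FirstRespTime) HOUR(2) TO SECOND(6) AS TransferTime
FROM DBC.QryLogV
WHERE UserName = 'GATEWAY_USER'   -- placeholder: the user the gateway logs on as
ORDER BY StartTime;
```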
In order to check DBQL on the Teradata side, I assume I would need DBA privileges? Unfortunately, that kind of privilege cannot be granted to non-IT users in my case.
Anyway, I tried something else: I ran a query against the DBC.QryLog view in Teradata and got the output below. As you can see from the last two records, the first "READ" lock level started at 2:47 PM and the second "READ" lock level started at 4:08 PM; the time elapsed between these two steps is how long it took for the table to be created in Oracle.
We were told that the Teradata server is on new hardware in the new data center. The Oracle DBA is going to look at increasing the buffer size in the client settings to retrieve data faster.
| 2:37:14 PM | NOLOCK | 2:46:57 PM | 2:46:57 PM | 2:46:57 PM | 6/12/2019 14:46 | 0:00:00.080000 | Help | Other SysOther | HELP SESSION; |
| 2:37:14 PM | ACCESS | 2:46:57 PM | 2:46:57 PM | 2:46:58 PM | 6/12/2019 14:47 | 0:00:03.570000 | Select | Select | SELECT NULL(VARCHAR(30), NAMED TABLE_CAT),DATABASENAME(VARCHAR(30), NAMED TABLE_SCHEM),TRIM(TABLENAME(VARCHAR(30)))(NAMED TABLE_NAME),TRIM(SUBSTR('TABLE VIEW TABLE MACRO JOURN |
| 2:37:14 PM | ACCESS | 2:46:57 PM | 2:47:01 PM | 2:47:02 PM | 6/12/2019 14:47 | 0:00:07.710000 | Select | Select | SELECT NULL (VARCHAR(30), NAMED TABLE_CAT), DATABASENAME (VARCHAR(30), NAMED TABLE_SCHEM), TABLENAME (VARCHAR(30), NAMED TABLE_NAME), COLUMNNAME (VARCHAR(30), NAMED COLUMN_NAME), -1 (SMALLINT, NAMED D |
| 2:47:14 PM | ACCESS | 2:46:57 PM | 2:47:09 PM | 2:47:10 PM | 6/12/2019 14:47 | 0:00:13.190000 | Select | Select | SELECT NULL(VARCHAR(30), NAMED TABLE_CAT),DATABASENAME(VARCHAR(30), NAMED TABLE_SCHEM),TRIM(TABLENAME(VARCHAR(30)))(NAMED TABLE_NAME),TRIM(SUBSTR('TABLE VIEW TABLE MACRO JOURN |
| 2:47:14 PM | ACCESS | 2:46:57 PM | 2:47:22 PM | 2:47:22 PM | 6/12/2019 14:47 | 0:00:00.040000 | Select | Other No group | SELECT "COL1","COL2","COL3" FROM "SCHEMA_NAME"."TABLE_NAME"; |
| 2:47:14 PM | READ | 2:46:57 PM | 2:47:22 PM | 2:47:22 PM | 6/12/2019 14:47 | 0:00:00.100000 | Select | Select | SELECT "COL1","COL2","COL3" FROM "SCHEMA_NAME"."TABLE_NAME"; |
| 4:07:14 PM | READ | 2:46:57 PM | 4:08:42 PM | 4:08:42 PM | 6/12/2019 16:08 | 0:00:00.010000 | Commit Work | Other No group | COMMIT; |
Yes, it certainly seems that data retrieval is the issue. It looks like you have about 24.5 seconds of preparation/metadata calls, and the actual SELECT takes only 0.1 seconds; the rest is data retrieval time. A larger block size may help some, but it's probably time to get the IT network support people involved.
Thank you Fred!
The DBA tried creating another Oracle gateway with the following settings, however the performance did not seem to be improved. What else can we try?
Thanks again for all the helpful suggestions!
The only other thing I can think of as far as the gateway itself would be to increase MaxRespSize (in odbc.ini) from the default of 65536 to 1048576. But if there is some sort of network issue, that's unlikely to help much.
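For reference, that setting goes in the Teradata DSN section of odbc.ini on the Oracle/gateway server. A sketch only; the DSN name, driver path, and host name below are placeholders for your installation:

```ini
; Placeholder DSN used by the gateway initialization file
[tera_dsn]
; Driver path varies by TTU install location and version
Driver=/opt/teradata/client/odbc_64/lib/tdata.so
; Placeholder Teradata host name
DBCName=teradata_host
; Increase the response buffer from the default 65536
MaxRespSize=1048576
```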
Someone needs to check the network for packet loss, retransmission, congestion, etc. between the two systems.