We're looking at reporting on CPU usage and trying to understand the difference between the CPU metrics in each table. We use PDCR to archive the Acctg table in hourly and daily blocks.
I understand that CPU in DBQL is associated with a query rather than a time period. Also, since we are on TD 13.10, DBQL does not log resource usage for the last step of aborted queries.
ResUsage reports CPU by fixed time interval, while Acctg reports CPU as steps complete on each AMP. I believe Acctg does not include parsing CPU, but that is typically a negligible amount.
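If you want to sanity-check that "negligible" assumption, DBQL logs parser CPU separately from AMP CPU, so a quick sum shows the ratio. A sketch only, against the standard DBC view; the date is hypothetical, and on a PDCR system you would point this at the history table instead:

```sql
/* How much parser CPU vs. AMP CPU did DBQL record for one day? */
SELECT SUM(ParserCPUTime) AS ParserCPU,
       SUM(AMPCPUTime)    AS AmpCPU
FROM DBC.DBQLogTbl
WHERE CAST(StartTime AS DATE) = DATE '2012-01-01';  -- hypothetical day
```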
My question revolves around the classification of "User" and "Server" CPU. ResUsage differentiates between the two; my understanding is that DBQL reports only "User" CPU, and I'm not sure about Acctg. When comparing Acctg with ResUsage for a day, the summed CPU might look something like this:
Acctg: 100 MM
ResUsage User: 85 MM
ResUsage Server: 35 MM
So Acctg comes in significantly higher than the User CPU alone, but significantly lower than the User+Server CPU as reported by ResUsageSCPU. Because of this, we're not sure how to interpret the data. Why would Acctg differ so much from ResUsage when aggregated over a day?
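For reference, this is roughly how we're pulling the totals. A sketch only, using the standard DBC views: note that DBC.AMPUsage is cumulative since its last reset, so a true daily figure has to come from the PDCR history copy, and the ResUsage CPU columns are commonly logged in centiseconds (check the Resource Usage documentation for your release), hence the division by 100:

```sql
/* Acctg: CPU seconds charged as steps complete on each AMP
   (cumulative since the last reset of DBC.AMPUsage) */
SELECT SUM(CpuTime) AS AcctgCPU
FROM DBC.AMPUsage;

/* ResUsage: user vs. server CPU per logged interval */
SELECT SUM(CPUUExec) / 100 AS UserCPU,
       SUM(CPUUServ) / 100 AS ServCPU
FROM DBC.ResUsageScpu
WHERE TheDate = DATE '2012-01-01';  -- hypothetical day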
Acctg - Captures CPU and I/O data at the AccountString/UserID level. Similar data can be found in DBQL, the difference being that the Acctg data tends to be more complete, because DBQL does not capture CPU and I/O for aborted or errored-out queries, nor for some work done by utilities.
Acctg is therefore the better fit for department-wise usage reporting.
DBQL - Captures resource usage data per query.
ResUsage - Captures system-wide performance/usage data for hardware components such as nodes, AWTs, and CPUs.
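To illustrate the department-wise point: since Acctg is keyed by account and user, a rollup like the following gives per-account CPU and I/O. This is a sketch against the standard DBC.AMPUsage view and assumes your account strings encode the department; for history beyond the last reset you would query the PDCR copy instead:

```sql
/* Per-account CPU and disk I/O rollup from Acctg */
SELECT AccountName,
       SUM(CpuTime) AS TotalCPU,
       SUM(DiskIO)  AS TotalIO
FROM DBC.AMPUsage
GROUP BY AccountName
ORDER BY TotalCPU DESC;
```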