I understand that the CPU utilized by a query can be determined by adding AMPCPUTime and ParserCPUTime. If I get a value of 100, does this mean the CPU was processing the query for 100 seconds, or is the number represented in some other unit?
Also, if the TD system has 50 AMPs, can we say that each AMP was processing for 2 seconds, assuming even data distribution (no skew)?
Finally, is there a direct relation between the CPU time mentioned above and CPU cycles?
The reporting unit of measure for CPU times is seconds.
AMPCPUTime is the sum of the CPU usage for all participating threads on all participating AMPs.
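As a rough sanity check (not an official formula), dividing the reported AMP CPU seconds by the number of AMPs gives the average per-AMP CPU time under the no-skew assumption from the question. The values below are hypothetical, not taken from a real DBQL row:

```python
# Illustrative only: average per-AMP CPU time assuming no skew.
# Both numbers are made up for the example.

amp_cpu_time = 100.0  # AMPCPUTime in seconds (sum over all AMPs)
num_amps = 50         # number of AMPs in the system

# With perfectly even distribution each AMP does an equal share;
# with skew, the busiest AMP does more than this average.
avg_cpu_per_amp = amp_cpu_time / num_amps
print(avg_cpu_per_amp)  # 2.0
```

Keep in mind this is only the average: with skew, one AMP can account for most of the total while the others sit nearly idle.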
The CPU time is actually calculated by a sampling method, but in theory it should be directly proportional to the number of CPU cycles consumed.
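Given that proportionality, a back-of-the-envelope cycle estimate is just CPU seconds times the clock frequency. The sketch below assumes a fixed, known clock rate (the 2.5 GHz figure is an assumption for illustration); real CPUs have frequency scaling and per-node differences, so treat the result as an approximation, not a measurement:

```python
# Rough estimate: cycles ~= CPU seconds x clock frequency.
# The clock rate here is an assumed value, not read from the system.

cpu_seconds = 100.0  # e.g. AMPCPUTime + ParserCPUTime, in seconds
clock_hz = 2.5e9     # assumed 2.5 GHz clock rate

estimated_cycles = cpu_seconds * clock_hz
print(f"{estimated_cycles:.3e}")  # 2.500e+11
```

This is indirect at best; DBQL itself reports time, not cycles.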
I am not sure where you get AMPCPUTime and ParserCPUTime. In my opinion it is best to calculate it in cycles; I think the QryLog table should be able to give you that.
Thanks for the inputs.
Can we find out the CPU cycles consumed by a query, directly or indirectly? AMPCPUTime and ParserCPUTime are available in the DBC.QryLog view, but I couldn't find CPU cycles there.