I am having trouble meeting the SLA for tactical queries. We are using an Appliance 2750, a 2-node system with 120 AMPs. An MQ Series Integrator sends stored procedure calls to the database. Each stored procedure generates approximately 100-120 single-AMP tactical queries (using the primary index); these are mostly deletes and inserts, apart from some regex functions applied to the input parameter values. Typically we receive around 20,000 messages per day, around 2,000 per hour during business hours. We are not able to process all these requests each day, so a queue builds up and only empties on Saturdays when the call volume is low.
I implemented TIWM and assigned all of these to the tactical workload (there are only two workloads defined on the system: Default, and Tactical for the MQSI queries). By the way, we are using SLES 11. But I don't see any improvement at all, even after implementing the tactical workload. We are only able to process around 500-800 messages per hour on average (against the 2,000 received). The tactical queries submitted to the database average around 160K per hour. Most of them run in under a second (typically 0.02 to 0.9 seconds). I checked the ResUsageSpma table and we have free CPU available almost all day (no flow-control situation). Some things I can't get my head around, using this query as an example:

Delete from A where PI='xyz';
1) The same query sometimes takes 0.01 seconds and at other times 0.8 seconds, even though there is plenty of free CPU (at least 30% idle)!
2) The number of messages processed per hour keeps changing: at night, when there is less activity, it goes up to 1,200 messages per hour, and it drops to 500 during the day when the system is relatively busy. But if these are tactical queries, why are they affected by other work on the system?
Is there anything else I can do to process 2,000 messages per hour?
Thanks in advance.
Did you check if you're I/O bound?
How do you submit those 100-120 single-AMP tactical queries within the SP, serially or in parallel?
Do the Inserts/Deletes process the same or different tables?
All these single-AMP queries run serially. When a message comes in, a delete is submitted against about 10-20 tables, and then the new record is inserted into a bunch of tables (after some validation of the input parameters, e.g. checking that the time and date are valid).
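The per-message pattern looks roughly like this (table and column names invented purely for illustration; the real SP repeats this for each of the 10-20 tables):

```sql
-- Illustration only: each message replaces its row in a series of
-- tables, one serial delete/insert pair per table, all by primary index.
DELETE FROM msg_detail WHERE msg_key = :msg_key;
INSERT INTO msg_detail (msg_key, msg_ts, payload)
VALUES (:msg_key, :msg_ts, :payload);
-- ...repeated serially for each of the remaining tables...
```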
Is there a query to check whether the system is I/O bound?
Also, I have noticed that many of these queries have some initial delay: (FirstStepTime - StartTime) Second(4,6) comes out anywhere from 0.01 to 0.8 seconds, and these queries show no corresponding workload delay. How do we interpret this delay?
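For reference, the delay I'm describing can be pulled from DBQL roughly like this (a sketch assuming query logging is enabled; DelayTime is the standard workload-delay column in DBC.DBQLogTbl):

```sql
-- Gap between request start and the first step beginning execution,
-- shown next to any TASM/TIWM workload delay for the same query.
SELECT QueryID,
       StartTime,
       (FirstStepTime - StartTime) SECOND(4,6) AS InitialDelay,
       DelayTime                               AS WorkloadDelay
FROM   DBC.DBQLogTbl
WHERE  StartTime >= CURRENT_TIMESTAMP - INTERVAL '1' HOUR
ORDER  BY InitialDelay DESC;
```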
If you are looking at ResUsageSpma, did you check the ProcBlks*/ProcWait* columns?
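On the I/O-bound question, a rough check against ResUsageSpma (a sketch using the standard CPUIdle/CPUIoWait/CPUUServ/CPUUExec columns; a high wait-for-I/O share alongside idle CPU points at I/O rather than CPU as the bottleneck):

```sql
-- Per-node split of CPU time: busy vs. idle vs. waiting on I/O.
-- High CPUIoWait with CPU still "free" suggests an I/O-bound system.
SELECT TheDate,
       NodeID,
       SUM(CPUIoWait)           AS TotalIoWait,
       SUM(CPUIdle)             AS TotalIdle,
       SUM(CPUUServ + CPUUExec) AS TotalBusy
FROM   DBC.ResUsageSpma
WHERE  TheDate = CURRENT_DATE
GROUP  BY TheDate, NodeID
ORDER  BY NodeID;
```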
Flow control is possible even if you have CPU available. Did you check ResUsageSawt? Do you throttle the concurrency of non-tactical work?
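A sketch for spotting AMP-level flow control in ResUsageSawt (column names assumed to be the usual FlowCtlTime/Available/InuseMax; check them against your release's ResUsage documentation). Non-zero FlowCtlTime on an AMP means messages were queued there even though node-level CPU looked free:

```sql
-- AMP worker task pressure: time in flow control and AWT availability
-- per AMP vproc for today.
SELECT TheDate, TheTime, VprId,
       FlowCtlTime,
       Available,
       InuseMax
FROM   DBC.ResUsageSawt
WHERE  TheDate = CURRENT_DATE
  AND  FlowCtlTime > 0
ORDER  BY TheTime, VprId;
```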
How many sessions are sending SP calls to the database concurrently? Just one? Do sessions stay logged on for many SP calls, or do you log on a new session for each call?