I don't know if there is a standard for this or not. What I have done in the past is to take the annual cost (or the cost for a shorter period if you don't have a year's worth of data) and divide it by the number of CPU seconds and I/Os used in that period to get the cost of a CPU second and the cost of an I/O. I have always used the CpuTime and DiskIO columns from DBC.AMPUsage for this calculation so that nothing is missed.
You'll need to decide what to include in the annual cost (is it just machine cost, or does it include DBA cost, etc.?) and the ratio to use for charging CPU vs. I/O. If your machine is balanced (i.e., you hit about the same number of CPU bottlenecks as I/O bottlenecks), you could go with a 50/50 split of cost between CPU and I/O. If CPU is your bottleneck, you may want to charge more for CPU than for I/O by using a 60/40 or 70/30 split.
Once you decide that, it becomes simple math (this assumes 60/40 split):
Cost per CPU second = (annual cost * .6) / CPU seconds used in one year
Cost per IO = (annual cost * .4) / IO's in one year
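That math can be sketched in a few lines of Python. The dollar amounts and usage totals below are made-up illustrative numbers, not figures from any real system; in practice the CPU-second and I/O totals would come from summing CpuTime and DiskIO in DBC.AMPUsage:

```python
def chargeback_rates(annual_cost, cpu_seconds, ios, cpu_share=0.6):
    """Split the annual cost between CPU and I/O and derive unit rates.

    cpu_share is the fraction of cost attributed to CPU
    (0.6 means a 60/40 CPU-to-I/O split).
    """
    cost_per_cpu_second = (annual_cost * cpu_share) / cpu_seconds
    cost_per_io = (annual_cost * (1 - cpu_share)) / ios
    return cost_per_cpu_second, cost_per_io

# Illustrative only: $1,000,000 annual cost, 50M CPU seconds, 2B I/Os
cpu_rate, io_rate = chargeback_rates(1_000_000, 50_000_000, 2_000_000_000)
print(cpu_rate)  # 0.012  -> $0.012 per CPU second
print(io_rate)   # 0.0002 -> $0.0002 per I/O
```

Changing cpu_share is all it takes to model a 50/50 or 70/30 split instead.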
There are always things that can complicate this, such as machine upgrades during the year (CPU seconds and/or I/Os might not be equivalent throughout the year). In addition, skewed queries or processes actually use more resources than they would be charged for. So you might have to make some adjustments, but it would be something to start with.
Thanks for your suggestion.
I am not sure if I would have access to the annual cost information, but there's no harm in trying to find out.
Had there been a standard benchmark available, it would have helped, because I only need to calculate the approximate cost savings from the performance improvement.