Can we improve the performance of complex report queries by increasing the data block size? What are the pros and cons of increasing the data block size?
Has anyone noticed improvement in performance by increasing data block size?
Increasing the block size can help when retrieving bulk contiguous records: more rows fit in each large data block, so we save on IO. At the same time it has its minuses. If the number of rows we retrieve is small and the data is not contiguous, the system will still pull in every large data block that meets the query, which occupies more space in spool and adds processing.
It will NOT increase spool size. We always qualify rows from the source table before writing to spool, so the block size of the table does not affect the size of the spool. It is correct that if a query is very selective, more data may have to be read from disk in order to satisfy the query; e.g., reading a single row will have to read 1MB from disk rather than 128KB. If there are a lot of these very selective requests and few larger scan/join requests, then a smaller block size is indicated. For scans and large joins, getting more off the disks with a single IO helps performance. The mix of the workload using the table needs to be considered when deciding whether to leverage large blocks.
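The trade-off above can be sketched with some back-of-envelope arithmetic. This is a minimal illustration, not a measurement of any real system; the 10 GB table size is an assumption chosen only to make the numbers concrete.

```python
import math

KB = 1024
MB = 1024 * KB

def scan_ios(table_bytes, block_size):
    """A full scan must read every block; bigger blocks mean fewer IOs."""
    return math.ceil(table_bytes / block_size)

def lookup_bytes(block_size):
    """A single-row lookup still reads one whole data block from disk."""
    return block_size

table_bytes = 10 * 1024 * MB  # assume a 10 GB table

for bs in (128 * KB, 1 * MB):
    print(f"block={bs // KB:5d}KB  "
          f"full-scan IOs={scan_ios(table_bytes, bs):6d}  "
          f"bytes read per single-row lookup={lookup_bytes(bs) // KB}KB")
```

With 1MB blocks the scan needs 8x fewer IOs, but every point lookup drags 8x more bytes off disk, which is exactly why the workload mix decides the right setting.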
I have a question which may fit this topic. Assume we have very large table rows, say only a few rows fitting into one 127KB data block. If I apply an ALTER TABLE statement adding several columns, will bigger data blocks (for example 1MB) lead to better performance in this case?
My assumption is that if each data block is already filled up with a few rows, adding new columns might cause a lot of block splits. Would increasing the data block size to 1MB improve the situation? Or are my assumptions wrong?
Hi Roland, I guess your assumption is right. For the ALTER, bigger data blocks store more rows, so fewer data blocks are used, and there will be fewer data block splits as well.
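The block-split intuition in this exchange can also be sketched numerically. The row sizes below (30 KB before the ALTER, 40 KB after the new columns) and the row count are illustrative assumptions, picked only to show how wide rows waste space in small blocks.

```python
import math

def blocks_needed(row_count, row_bytes, block_bytes):
    """Rough block count: whole rows per block, rounded up over the table."""
    rows_per_block = max(1, block_bytes // row_bytes)
    return math.ceil(row_count / rows_per_block)

rows = 1_000_000
before = 30 * 1024  # assumed bytes per row before the ALTER
after = 40 * 1024   # assumed bytes per row after adding columns

for block in (127 * 1024, 1024 * 1024):
    print(f"{block // 1024:4d}KB blocks: "
          f"{blocks_needed(rows, before, block)} -> "
          f"{blocks_needed(rows, after, block)} blocks")
```

With 127KB blocks the table goes from 4 rows per block to 3, forcing roughly a third of the blocks to split; with 1MB blocks the per-block headroom absorbs the wider rows with far less relative churn.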