Question: I just need clarification on the statement below:
.. after the data block has been read from the disk
How does this work internally? Does reading a data block mean loading that block into memory and then checking each row's hash until the proper row is found?
I would also like an exact explanation of the statements below:
A larger data block size has an advantage for full-file query scans.
Primary-key queries/single-row queries have huge overhead if data blocks are big, since the H/W and S/W must handle/read the whole data block.
The data block is read off of disk into memory. The data block contains a list of rowids and offsets into the block that allow access to individual rows. If a single row is being read by hash, then the rowid is looked up in the list and the row is accessed via its offset. If the rows are being scanned, then the offsets are used to walk through all the rows and compare against whatever criteria are to be tested on the columns.
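To make that concrete, here is a minimal sketch in Python. It is a hypothetical illustration, not any particular database's on-disk format: the block holds raw row bytes plus a row directory mapping each rowid to a byte offset inside the block. Single-row access jumps straight to one offset; a scan walks every offset and tests each row.

```python
class DataBlock:
    """Hypothetical in-memory model of one data block (illustration only)."""

    def __init__(self):
        self.data = bytearray()     # raw row storage inside the block
        self.row_directory = {}     # rowid -> byte offset into self.data

    def insert(self, rowid, row_bytes):
        # Record the row's starting offset, then append a 2-byte length
        # prefix followed by the row bytes.
        self.row_directory[rowid] = len(self.data)
        self.data += len(row_bytes).to_bytes(2, "big") + row_bytes

    def read_row(self, rowid):
        """Single-row access: look up the rowid, jump to its offset."""
        off = self.row_directory[rowid]
        length = int.from_bytes(self.data[off:off + 2], "big")
        return bytes(self.data[off + 2:off + 2 + length])

    def scan(self, predicate):
        """Full scan: walk all offsets and test each row against a predicate."""
        for rowid, off in self.row_directory.items():
            length = int.from_bytes(self.data[off:off + 2], "big")
            row = bytes(self.data[off + 2:off + 2 + length])
            if predicate(row):
                yield rowid, row

block = DataBlock()
block.insert(101, b"alice")
block.insert(102, b"bob")
print(block.read_row(102))                              # direct lookup via offset
print(list(block.scan(lambda r: r.startswith(b"a"))))   # walk every row
```

The key point the sketch shows: once the whole block is in memory, an offset is just "start reading this row at byte N of the block", so a single-row read by rowid never has to examine the other rows.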
There is a tradeoff between large blocks for scanning and smaller blocks for single-row operations, but it is not "huge". Unless there is a huge volume of single-row operations and a much lower volume of queries that look at many rows, it is a better trade to stick with the larger block sizes.
I understood everything you said except the part about offsets. What do you mean by those offsets?
Is it an offset bit that you're referring to?