Online Archive works about the same under DSA as it does under ARC; there is a Teradata Orange Book on the subject available via "Teradata at Your Service".
Online Archive will consume additional resources across the system.
While I'm sure spool is being used here and there, I would think the size of the sub-table is probably going to be of more concern.
How much space will it take up? While 20% additional is the figure usually bandied about, the two major factors are the duration of the backup and the rate of change during that window; theoretically the sub-table could grow larger than the base table if allowed to.
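To make that sizing point concrete, here is a minimal back-of-the-envelope sketch (the function name and figures are hypothetical, not from any Teradata tool): the sub-table grows with the change rate during the backup window, not with the size of the base table.

```python
# Rough, hypothetical sizing sketch: the online-archive sub-table grows with
# the rate of change during the backup window, not with base-table size.
def subtable_estimate_gb(change_rate_gb_per_hr, backup_hours):
    """Before-image log ~ change rate x backup duration (worst-case upper bound)."""
    return change_rate_gb_per_hr * backup_hours

# e.g. 5 GB/hr of updates over a 6-hour backup window
print(subtable_estimate_gb(5, 6))  # 30 GB of extra perm space, worst case
```

A long backup window on a heavily updated table is how the sub-table can end up bigger than the base.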
On a restore the base table is restored first, then the sub-table; the before-images in the sub-table are then applied back to the base table, so the object/set is returned to the state it was in when the sync point was established.
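The roll-back step above can be sketched as follows. This is a simplified illustration with hypothetical data structures, not Teradata's actual implementation: each log entry holds a row's value as it was *before* the change, so replaying the log undoes any updates that landed mid-backup.

```python
# Minimal sketch of applying before-images: change_log is a list of
# (row_id, before_value) pairs captured while the online backup ran.
def apply_before_images(base, change_log):
    for row_id, before_value in change_log:
        if before_value is None:
            base.pop(row_id, None)        # row was inserted during the backup: remove it
        else:
            base[row_id] = before_value   # row was changed during the backup: restore old value
    return base

restored = {1: "new", 2: "b", 3: "inserted-late"}
log = [(1, "a"), (3, None)]               # before-images captured during the backup
print(apply_before_images(restored, log))  # {1: 'a', 2: 'b'}
```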
I'm not sure what you are asking in your second point.
There are multiple types of compression within Teradata (again, there is an Orange Book on compression which might be handy). MVC and ALC stay compressed under both ARC and DSA, while for BLC:
- for ARC, the database will uncompress the data before sending it out
- for DSA, the data stays BLC-compressed
No spool is used during DSA backups. The tables are not spooled; they are copied directly out to the backup environment. The changes are stored in a sub-table, as part of the permanent table, until both the table and the change log have been successfully backed up to the backup environment.
The guideline for a small change percent is twofold:
- The log will get large if a large amount of change is applied while the backup is going on, increasing the perm space needed to store the table.
- A restore will take longer to apply the change logs, especially if online backups are done for a number of days.
When the change percent gets a lot higher, it is cheaper and quicker to back up the table offline than to do the online backup.
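The two points above can be put into a toy cost model. Everything here is an assumption for illustration (the function, throughput figure, and numbers are made up): an online restore must replay change logs, so its cost grows with the change percent and with the number of daily online backups since the last full one.

```python
# Hypothetical restore-cost model: base-table restore plus replaying one
# change log per day of online backups, at an assumed throughput.
def restore_cost_hours(table_gb, change_pct, days_of_logs, gb_per_hr=100):
    base = table_gb / gb_per_hr                              # restore the base table
    logs = table_gb * change_pct * days_of_logs / gb_per_hr  # replay each day's log
    return base + logs

# A 1000 GB table at 5% daily change, with a week of online-backup logs:
print(restore_cost_hours(1000, 0.05, 7))  # 13.5 hours vs 10.0 for a plain restore
```

The crossover is the point where the log-replay term outweighs the cost of simply taking a full offline backup more often.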
Thank you both for clearing that up.
Regarding archive logs:
Is there any way we could run both FastLoad and backups in parallel on the same objects?
The trouble is that DBC needs to take an archive log on the objects it is supposed to back up (during online backups); however, if FastLoad is running a load job, it takes an explicit archive log, and DBC then skips those objects, which leads to BACKUPS COMPLETED WITH ERRORS. Is there any way to work around this?
If the above is a no, what if we make sure FastLoad runs only after the backup job? If FastLoad is kicked off while the backup job is still running, will it go into a blocked state or will it fail?