We are working on implementing fallback within our data warehouse. We are committed to embracing the technology, and the command itself is simple enough. We have even run tests on a few selected tables to monitor and compare the performance impact.
For our first phase we have singled out one application and will be altering at both the table and database levels. We plan to alter just over 300 tables spanning 25 databases and occupying about 10 GB.
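For reference, the table-level change being discussed is a single DDL statement per table; a minimal sketch, assuming standard Teradata syntax (the database and table names below are placeholders, not from the original post):

```sql
-- Add fallback protection to one existing table. Teradata copies the
-- rows into fallback sub-tables on the other AMPs in each cluster as
-- part of this transaction.
ALTER TABLE sales_db.order_detail, FALLBACK;

-- Removing fallback later is the mirror-image statement.
ALTER TABLE sales_db.order_detail, NO FALLBACK;
```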
My question to the forum: is a backup/archive of tables being altered in this way necessary or not?
I understand the DBA role and value my job. As we say, it's always better to be safe than sorry, and in this case 10 gigabytes is small enough, so I will archive them just to play it safe. But I have terabytes to do later, and the windows of opportunity may be limited. So I started to investigate the true need and have not found anything anywhere I've looked. I'm just curious whether an archive is truly necessary. When you think about it, the table itself is not actually being changed: you are simply instructing the system to create a copy on physically isolated disks, and intelligent software then keeps that copy in sync on every insert, update, and delete.
Being unfamiliar with your system, I am curious why you ARE using fallback?
You did not say whether fallback will be applied to all your tables or only a few. Fallback is generally used only on critical tables, since it doubles your disk usage. All the shops I've been at have used FALLBACK on an extremely limited basis. Archive to tape is cheap, but Teradata disk space is not, so that is a very important factor to consider. In addition, Teradata has before- and after-image journaling that could be used instead of fallback.
To answer your question specifically: your level of concern should be about the same as for an INSERT ... SELECT into a new table, no more and no less. The fallback logic only reads the primary table and index rows and duplicates them into the new fallback sub-tables for that table. In a 4-AMP cluster, each AMP receives about one third of the rows from each of the other three AMPs into each fallback sub-table, of which there are three for the primary rows and three for each index defined on the table. Your table will take up twice as much space, so make sure perm space is adjusted to accommodate that increase. If there is not enough space, the transaction to add fallback will roll back. If the system restarts in the middle (for some other reason), the fallback will be undone just like any other transaction.
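The perm-space check mentioned above can be done before running the ALTER; a sketch using Teradata's DBC.DiskSpaceV dictionary view (the database name is a placeholder):

```sql
-- Compare current usage against the perm limit for the target database.
-- The tables being altered must have room to roughly double in size,
-- or the transaction adding fallback will roll back.
SELECT DatabaseName,
       SUM(MaxPerm)                    AS MaxPerm,
       SUM(CurrentPerm)                AS CurrentPerm,
       SUM(MaxPerm) - SUM(CurrentPerm) AS FreePerm
FROM   DBC.DiskSpaceV
WHERE  DatabaseName = 'sales_db'
GROUP  BY DatabaseName;
```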
The decision to use Teradata's fallback option was the result of a long process of identifying a solution to a critical need, taking all available options into consideration, including archives to tape and journaling. We are a petabyte shop with a large budget, and select applications within our warehouse have deemed the fallback option their solution. For those applications we identified the specific tables that will get fallback applied. If all the tables within a database happen to qualify, we'll alter at the database level.
Thank you, rgs, great reply to my question. That's basically what I thought too; it's just nice to hear it from someone else. And thanks for the heads-up regarding space. I am aware that the space requirements for tables getting this option will double, and I have reserved a little over 107 terabytes for our commitment.
One other note: MODIFY ... AS FALLBACK "at the database level" only means FALLBACK will become the default for any new CREATE TABLE in that database. You would still need to individually ALTER each existing table within the database.
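To illustrate the distinction: the database-level statement only changes the default for future tables, so existing tables still need individual ALTERs. A sketch of both, assuming standard Teradata syntax (the database name is a placeholder, and DBC.TablesV is the current data-dictionary view of tables):

```sql
-- Makes FALLBACK the default for tables created in this database
-- from now on; existing tables are NOT changed.
MODIFY DATABASE sales_db AS FALLBACK;

-- One way to script the remaining per-table ALTERs: generate the
-- statements from the data dictionary, then run the resulting output.
SELECT 'ALTER TABLE ' || TRIM(DatabaseName) || '.' ||
       TRIM(TableName) || ', FALLBACK;'
FROM   DBC.TablesV
WHERE  DatabaseName = 'sales_db'
AND    TableKind = 'T';
```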
Thank you, Fred, for pointing that out. I did read that in the SQL Reference: Data Definition Statements manual and saw it while testing. I can understand why, but I would have liked the option of changing all existing tables if prompted for it. Oh well, just a few more alters...