How to read all data from a large table with minimum locking

Junior Contributor

Re: How to read all data from a large table with minimum locking


m4r3k wrote:

I don't know how it's going to be processed. Processing is not the subject of this issue. All I need is to read the content of the whole table/database as raw (binary or textual) data.


You got your answer already: use TPT EXPORT.

I'm out.

Teradata Employee

Re: How to read all data from a large table with minimum locking

It was not the intent of Carrie's article to say that No Spool is only for small tables. That option can be used with any size table to avoid the spool as long as the entire table is being exported.

The SELECT statement in Export allows a locking modifier, so LOCKING FOR ACCESS can be specified for each SELECT.
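
For illustration, here is a minimal TPT job script sketch combining both points, i.e. NO SPOOL plus an access lock. The system name, credentials, database, table, and file names are placeholders, so treat it as a starting point rather than a finished job:

DEFINE JOB export_whole_table
DESCRIPTION 'Full-table export with NoSpool and an access lock'
(
  /* Derive the schema from the table itself */
  DEFINE SCHEMA table_schema FROM TABLE 'mydb.mytable';

  DEFINE OPERATOR export_op
  TYPE EXPORT
  SCHEMA table_schema
  ATTRIBUTES
  (
    VARCHAR TdpId        = 'mytdpid',
    VARCHAR UserName     = 'myuser',
    VARCHAR UserPassword = 'mypassword',
    /* NoSpool applies to any table size when the whole table is read */
    VARCHAR SpoolMode    = 'NoSpool',
    /* Access lock so concurrent writers are not blocked */
    VARCHAR SelectStmt   = 'LOCKING ROW FOR ACCESS SELECT * FROM mydb.mytable;'
  );

  DEFINE OPERATOR file_writer
  TYPE DATACONNECTOR CONSUMER
  SCHEMA *
  ATTRIBUTES
  (
    VARCHAR FileName = 'mytable.dat',
    VARCHAR Format   = 'Formatted',
    VARCHAR OpenMode = 'Write'
  );

  APPLY TO OPERATOR (file_writer)
  SELECT * FROM OPERATOR (export_op);
);

Such a script would normally be launched from the client with tbuild, e.g. tbuild -f export_job.txt.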

It would be valuable to us if the required processing that cannot be done in the database could be described. We are always looking for use cases like that to give to our engineers so that they can build new capabilities into the database engine, allowing the processing to be done in the engine and taking advantage of the power of the parallel system that is in place.

Enthusiast

Re: How to read all data from a large table with minimum locking


ToddAWalter wrote:

It was not the intent of Carrie's article to say that No Spool is only for small tables. That option can be used with any size table to avoid the spool as long as the entire table is being exported.

The SELECT statement in Export allows a locking modifier, so LOCKING FOR ACCESS can be specified for each SELECT.

@ToddAWalter Thanks! That's very useful information. If I can use No Spool for any table, then this solution looks really promising.

It would be valuable to us if the required processing that cannot be done in the database could be described. We are always looking for use cases like that to give to our engineers so that they can build new capabilities into the database engine, allowing the processing to be done in the engine and taking advantage of the power of the parallel system that is in place.


The processing part does not care about the data structure at all; it expects only a binary or text stream, so it will not make use of the database's capabilities. When I wrote earlier that it's impossible to move the functionality into the database, I meant rather that it would be infeasible and impractical. It's a generic solution that supports a number of data sources, developed over the years, with limited documentation and its own language for defining processing rules. Any attempt to migrate the functionality to use Teradata mechanisms would require engineering time far beyond any acceptable time frame.

Enthusiast

Re: How to read all data from a large table with minimum locking


ToddAWalter wrote:

The SELECT statement in Export allows a locking modifier, so LOCKING FOR ACCESS can be specified for each SELECT.

The only problem I see now is the .LOGTABLE directive, which requires write permission on at least one database where the log table will be created. I understand that under normal circumstances this table is needed to create a restart log. In my case I don't really need it, as I can simply run the export again if something goes wrong.
I would prefer an option to run the FastExport tool with only read access to the database. Is that possible?
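
For reference, a minimal FastExport script sketch showing where .LOGTABLE enters the picture; the system name, credentials, and database/table/file names are placeholders, and utildb stands for some database the user can write to:

.LOGTABLE utildb.export_restartlog;  /* the one statement needing write rights */
.LOGON mytdpid/myuser,mypassword;
.BEGIN EXPORT SESSIONS 8;
.EXPORT OUTFILE mytable.dat MODE RECORD FORMAT FASTLOAD;
LOCKING ROW FOR ACCESS
SELECT * FROM mydb.mytable;
.END EXPORT;
.LOGOFF;

Everything else in the script only reads, so the .LOGTABLE line is the sole reason write permission comes up at all.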

Enthusiast

Re: How to read all data from a large table with minimum locking


ToddAWalter wrote:

If it is desired to always read all rows and all columns of the table, then see the NO SPOOL option of the Export operator in TPT. 


@ToddAWalter do you know if it's possible to achieve the same behaviour (avoiding the spool) when running a standard query from CLI/ODBC?

I mean with the same limitations FastExport has, i.e. only simple queries, no ORDER BY, no JOINs, etc.

Teradata Employee

Re: How to read all data from a large table with minimum locking

The NO SPOOL capability is specific to TPT Export with its multiple-session data extraction mechanism.