unable to create a table with 600 columns

Enthusiast


Hi

We are trying to create a table with 600 columns in Teradata 13.10 for reporting purposes.

But we are unable to create the table, even though we created the same type of table in Teradata 13.0.

Can anyone please suggest what might be wrong?

6 REPLIES
Enthusiast

Re: unable to create a table with 600 columns

What error message is being returned? Teradata 13.10 supports 2048 columns per table, so it isn't necessarily the number of columns that is the reason for the failure.

Enthusiast

Re: unable to create a table with 600 columns

Hi

We tried again to create the table with 600 columns, and now the table is created.

But after inserting data into the table and then selecting from it, we are facing an error.

Error message: "Specified argument was out of the range of valid values. Parameter name: responseMsgSize. Unexpected error."

Can anyone please advise?

Enthusiast

Re: unable to create a table with 600 columns

Can anyone please give me a suggestion on this?

Enthusiast

Re: unable to create a table with 600 columns

Any suggestions from anyone?

Enthusiast

Re: unable to create a table with 600 columns

Sounds like an OS error to me; did you get a Teradata error number?

Try googling around; the last bug on this list sounds like your issue.

http://developer.teradata.com/doc/connectivity/tdnetdp/13.11/webhelp/WhatsNew13110000.html

Can you do a select from BTEQ or FastExport? Other thoughts: is the block size the same, did you set compression, did your TD13 DBS Control settings get migrated, etc.? If it's not an issue with the client tool, it's time to open an incident with the GSC.
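For example, a quick sanity check from BTEQ along these lines (the logon string, database and table name are placeholders) would tell you whether the problem is in the database or in the client tool:

.LOGON mytdpid/myuser,mypassword
SELECT TOP 10 *                  /* placeholder table name */
FROM   reportdb.wide_report_tbl;
.LOGOFF
.QUIT

If that comes back cleanly, the problem is almost certainly on the client side.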

Enthusiast

Re: unable to create a table with 600 columns

This is going to sound like a rant, but I've not yet encountered a legitimate need for a 600-column table. These usually come from SAS / Java requirements, or weak business analysis ("I don't know exactly what I need, so I'll put everything up there and let the users figure it out"). It is a relational database, and relation == join; if you want something that looks like an Excel autofiltered spreadsheet, then you should be using a tool like BOBJ or MSTR. The data architect or physical design team should have intervened. Analytic data sets can be implemented with views, and if you need to materialize for performance, I would still materialize only the frequently used columns.
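To illustrate the view approach, a sketch along these lines (all database, table and column names here are invented) presents the wide shape to users while the data stays in narrower base tables, so only the handful of heavily used columns ever needs to be materialized:

CREATE VIEW reportdb.customer_analytics_v AS
SELECT  b.customer_id            /* shared key          */
      , b.segment_cd             /* hot base attributes */
      , d.household_size         /* demographic segment */
      , f.sales_mtd              /* fact segment        */
      , f.sales_ytd
FROM    reportdb.customer_base  b
JOIN    reportdb.customer_demog d ON d.customer_id = b.customer_id
JOIN    reportdb.customer_sales f ON f.customer_id = b.customer_id;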

A 600-column table is going to have performance problems due to row size (say 5-10 rows per IO) and spool. I've dealt with "reporting" tables at 4 companies, and the experience has been somewhere between problematic and absolute disaster. The wide tables typically require costly, inflexible ETL which pushes data-level attributes into the DDL. The wide tables also cross-connect data, so ETL maintenance is costly and regression testing is a challenge.

I recommend analyzing the column-level usage rates from the DBQL object log for non-ETL users and assessing whether the design is really working for you. You need to look at non-ETL users because ETL always touches all columns. My experience has been that either 1) a small subset of the columns is used - as little as 2% and no more than 50 columns, or 2) the users are summing columns into more generic variables, basically unwinding the crosstab.
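A query along these lines will show which columns non-ETL users actually touch. It assumes query logging with objects is enabled; the DBQL table and column names are from memory (check DBC on your release), and the database, table and user names are placeholders:

SELECT  o.ObjectColumnName
      , COUNT(DISTINCT o.QueryID) AS query_cnt
FROM    DBC.DBQLObjTbl o
JOIN    DBC.DBQLogTbl  q
  ON    q.ProcID  = o.ProcID
 AND    q.QueryID = o.QueryID
WHERE   o.ObjectDatabaseName = 'REPORTDB'         -- placeholder database
  AND   o.ObjectTableName    = 'WIDE_REPORT_TBL'  -- placeholder table
  AND   o.ObjectType         = 'Col'              -- column-level objects only
  AND   q.UserName NOT IN ('ETL_USER')            -- exclude ETL, it touches everything
GROUP BY 1
ORDER BY 2 DESC;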

Some suggestions for performance:

1) Create an STJI (single-table join index) on the PK columns; these will be used by the joins so you don't have to push the entire wide table into spool.

2) Vertically partition the table and link the segments with soft RI (see the sketch after this list). This approach will give you more modular and supportable ETL code as well. Alternatives include splitting into dimension and fact attributes, hot and not, by source system, or SLA / non-SLA. If you have different time grains for facts, such as trailing 12 months and YTD, then split on time grain. You may well have all the tables you need as working tables; just leave them separate and don't club them together into a final denormalized table. The downside is that the keys have to be repeated for every table segment, requiring more space.

3) If you can't split the table, create an STJI with just the frequently used attributes, using the same PI as the current table (also covered in the sketch below). The JI will be used for most queries and can provide significant CPU savings. My experience has been that the ETL cost for a JI with the same PI is less than a 5% hit.

4) Do not allow the table to be used as a source for any downstream ETL. The disaster experience was a similar wide table that was used to source downstream reporting and analytics. The net effect after 4 years was an implementation that required 3x the space and 4x the CPU of a reasonable design. In the end a floor-sweep upgrade was required - it was cheaper than a reimplementation. On an enterprise-scale platform this can be a multi-million dollar decision.
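To make 2) and 3) concrete, here is a rough sketch with invented names, columns and keys - not your actual design. The wide table is split into a hot and a cold segment sharing the same PI, soft RI ties them together, and a single-table join index covers the hot columns of any wide table that has to remain, so most queries never carry the full row:

/* hot segment: carries the key plus the heavily used columns */
CREATE TABLE reportdb.report_hot
( report_key   INTEGER NOT NULL
, sales_mtd    DECIMAL(18,2)
, sales_ytd    DECIMAL(18,2)
, CONSTRAINT pk_report_hot PRIMARY KEY (report_key)
);

/* cold segment: rarely used columns, linked back with soft RI */
CREATE TABLE reportdb.report_cold
( report_key         INTEGER NOT NULL
, rarely_used_attr   VARCHAR(100)
, CONSTRAINT fk_report_cold FOREIGN KEY (report_key)
      REFERENCES WITH NO CHECK OPTION reportdb.report_hot (report_key)
)
PRIMARY INDEX (report_key);

/* if the wide table has to stay, cover the hot columns with an STJI
   on the same PI so most queries never touch the 600-column row     */
CREATE JOIN INDEX reportdb.wide_report_ji AS
SELECT report_key, sales_mtd, sales_ytd
FROM   reportdb.wide_report_tbl
PRIMARY INDEX (report_key);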