I am getting error 3933: "The maximum possible row length in the table is too large." The table contains around 800 columns and is being pulled from Greenplum as the source.
Will increasing the block size help handle this scenario? If yes, how do I do that?
Increasing the block size will not remove the 3933 error. Perm and response rows are currently limited to roughly 64 KB, and that is the limit being exceeded. The source table must be split into two or more tables.
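One common way to do the split is a vertical partition: put the primary key in both halves and recombine them with a view. A minimal Teradata-style sketch, with hypothetical table and column names:

```sql
-- Hypothetical vertical split of an overly wide source table.
-- Both halves carry the same primary key so rows can be rejoined.
CREATE TABLE wide_src_part1 (
    id      INTEGER NOT NULL,
    col_a   VARCHAR(200),
    -- ... first half of the columns ...
    PRIMARY KEY (id)
);

CREATE TABLE wide_src_part2 (
    id      INTEGER NOT NULL,
    col_b   VARCHAR(200),
    -- ... remaining columns ...
    PRIMARY KEY (id)
);

-- Optional: a view that presents the original wide shape to consumers.
REPLACE VIEW wide_src_v AS
SELECT p1.id, p1.col_a, /* ... */ p2.col_b /* ... */
FROM wide_src_part1 p1
JOIN wide_src_part2 p2 ON p1.id = p2.id;
```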
A review of the data types may be in order as well. If there are many CHAR/VARCHAR columns, for instance, check whether the defined lengths are really required for your data.
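To see where the row budget is going, you can query the data dictionary for the declared column sizes; `ColumnLength` in `DBC.ColumnsV` reports the internal byte length for character and byte types (a sketch; the database and table names here are placeholders for your own):

```sql
-- List the widest columns first to find the biggest contributors
-- to the row length. ColumnLength may be NULL for some numeric types.
SELECT ColumnName, ColumnType, ColumnLength
FROM DBC.ColumnsV
WHERE DatabaseName = 'mydb'        -- hypothetical database name
  AND TableName    = 'wide_src'    -- hypothetical table name
ORDER BY ColumnLength DESC;
```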
We had already reviewed the column lengths and trimmed them to the sizes our data really needs, but the row length still exceeds the limit and throws error 3933. Is there any other way to handle this besides splitting the table into smaller tables?
Greenplum is based on PostgreSQL, which has a much higher record-size limit, so you will probably need to split the table (or start normalizing it).
Large objects (LOBs) take up only about 50 bytes in the base-table row for the object identifier, because the data itself is stored in subtable rows. Converting wide character columns to LOB types may avoid the 3933, but it introduces other issues when processing LOBs. Refer to Chapter 14 (database-level capacity planning) of the Database Design reference and to the Data Types and Literals reference for more details.
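For example, a handful of very wide VARCHAR columns could be redefined as CLOBs so that only the small identifier counts against the base row. A sketch with hypothetical names; note that LOB columns carry restrictions (e.g. they cannot be part of the primary index and are awkward in some load utilities):

```sql
-- Hypothetical redefinition: move the widest character column to CLOB
-- so only its object identifier (~50 bytes) sits in the base row.
CREATE TABLE wide_src_new (
    id      INTEGER NOT NULL,
    notes   CLOB(2M),   -- was a very wide VARCHAR; data now lives in a subtable
    -- ... remaining columns ...
    PRIMARY KEY (id)
);
```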