"Best format" depends on what you intend to do with the data. Delimited text is probably the most standard / portable option and GZIP compression will decrease the amount of data that needs to be transmitted.
The S3 access module will only write directly to S3-compatible storage; its output can't be piped anywhere else. As an alternative, you could have TPT write to a local named pipe and have another process read from it, reformat the data, and upload it to S3.
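A minimal sketch of the named-pipe approach (POSIX only, since it relies on `os.mkfifo`). Here a background thread stands in for TPT writing delimited text to the pipe, and the reader side gzips the stream into a local file standing in for the S3 upload step; none of the names below are part of any TPT API.

```python
import gzip
import os
import shutil
import tempfile
import threading

pipe_dir = tempfile.mkdtemp()
pipe_path = os.path.join(pipe_dir, "tpt_export.pipe")
os.mkfifo(pipe_path)  # TPT would open this path as if it were a plain file

def fake_tpt_writer():
    # Stands in for TPT writing pipe-delimited rows to the named pipe
    with open(pipe_path, "w") as pipe:
        for i in range(3):
            pipe.write(f"{i}|row_{i}\n")

writer = threading.Thread(target=fake_tpt_writer)
writer.start()

# Reader side: gzip the pipe's contents as they arrive. In a real
# pipeline this step could instead reformat (e.g. to Parquet) and
# upload the result to S3.
out_path = os.path.join(pipe_dir, "export.csv.gz")
with open(pipe_path, "rb") as src, gzip.open(out_path, "wb") as dst:
    shutil.copyfileobj(src, dst)
writer.join()

# Verify the round trip by decompressing what was captured
with gzip.open(out_path, "rt") as f:
    data = f.read()
print(data)
```

In a real job, TPT's file writer would be pointed at `pipe_path` and the reader would run as a separate process started before the TPT job.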
Or you could perhaps have TPT "write" to a Hadoop instance (via TDCH) or even a Kafka instance (via the Kafka access module) and set up the receiving side to reformat and store the data as Parquet. Those solutions involve lots of moving parts to install and configure, and tradeoffs to consider - probably too much for a forum answer.