I've been working on a proof-of-concept project, setting up DSU as a replacement for our current use of ArcMain. One stumbling block I hit was that I couldn't find any documentation on the relationships between the Streams parameters in the "Systems & Nodes" section of BAR Setup in Viewpoint, the "Backup Solution - Disk File System" specification of Max Open Files, and the "Target Groups - Remote Groups" specification of the Disk File System Open Files. While I have talked to a few people supporting BAR who I'm certain understand all of this inside and out, questions about these relationships have tended to get funnelled to lower-level staff who have been very unhelpful.
So I'll post what I think I've found, in the hope it helps the next person trying to configure their DSC system. Feel free to add clarifications, corrections, or additional insights.
Relationships between Streams and Open File settings
Default Stream Limits:
"For each job on a node": this value times the number of database nodes equals the number of data files that will be produced in each backup. The total number of files created equals this value times the number of database nodes times 2 (because each data file gets a companion ".meta" file).
"For Each Node" is the maximum number of discrete transfers allowed for a node, and poses an upper limit on the number of transfers. It must be equal to or greater than the number of streams for each job times the number of desired simultaneous backups.
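A quick sanity check of the arithmetic above, as I understand it. The numbers here are made up for illustration and the variable names are mine, not DSC setting names; plug in your own configuration.

```python
# Hypothetical configuration values for illustration only.
streams_per_job_per_node = 4   # "For each job on a node"
database_nodes = 10
simultaneous_backups = 2
per_node_limit = 16            # "For Each Node"

# Data files produced by one backup, then the total including ".meta" companions.
data_files_per_backup = streams_per_job_per_node * database_nodes   # 4 * 10 = 40
total_files_per_backup = data_files_per_backup * 2                  # 80 with .meta files

# "For Each Node" must cover every concurrent job's streams on that node.
assert per_node_limit >= streams_per_job_per_node * simultaneous_backups

print(data_files_per_backup, total_files_per_backup)
```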
Disk File System Details - File System:
Max Open Files: the maximum number of files that can be opened by DSC to accommodate all concurrent backups. This must be equal to or greater than the number of backups times the number of streams per backup times the number of nodes (possibly times 2, to accommodate both the data and .meta files).
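Expressed as a check (again with made-up numbers; the times-2 factor is the open question about whether data and .meta files are open at the same time):

```python
# Illustrative values only; substitute your own.
concurrent_backups = 2
streams_per_backup = 4
nodes = 10
max_open_files = 200   # "Max Open Files" on the Disk File System

required = concurrent_backups * streams_per_backup * nodes   # 2 * 4 * 10 = 80
required_with_meta = required * 2                            # 160 if .meta files are also held open

assert max_open_files >= required_with_meta
print(required, required_with_meta)
```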
Target Group Targets:
Disk File System Number of Open Files: this appears to be the number of files that can potentially be opened per backup. Example: if the Backup Solution Max Open Files is set to 1000 and the Target Disk File System Open Files is set to 250, then 4 simultaneous backups can run. Note: this must be equal to or greater than the number of data streams per job on a node times the number of nodes (possibly times 2, depending on whether the data file remains open while the .meta files are being written). The ratio between the file system Max Open Files and the Target Disk File System Open Files has to be managed. I believe DSC only supports 20 backups maximum, but your DSC server may not support that many. With poor choices for these parameters I've kicked off too many backups, which results in all the backups showing as active but never making any progress.
I think this depends a little on how you have set things up and/or how you are running your DSA jobs.
It's been a while since I've looked at DSU, so a lot of this is from memory.
The .meta file is per job.
As this is DSU, the OS-level open-file ulimit is going to affect how many files you can have open at one time, from both a source and a target point of view.
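You can inspect that limit for a process from Python's standard library (this reads the limits of the current process, so it only reflects the dsc process if run under the same user and PAM/limits configuration):

```python
import resource

# Soft and hard limits on open file descriptors (the "ulimit -n" values)
# for the current process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)
```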
The term "Streams" is a bit overloaded. I try to split it in two: "AMP Streams", i.e. the data streams from the RDBMS AMPs to DSMAIN, and "Output Streams", the streams that hit the backup storage devices.
You mention 20 backups maximum, which makes me think you are running a bunch of single-stream backups. What you are more likely hitting is the system hardlimit: when a system is added to the DSC, a default softlimit and hardlimit are defined. The softlimit by default is equal to the number of AMPs per RDBMS node, and the hardlimit by default is usually twenty times that value; both of these numbers are routinely lowered.
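With those defaults, the arithmetic looks like this (the AMP count is an example, not a recommendation):

```python
amps_per_node = 30                        # hypothetical AMPs per RDBMS node

default_softlimit = amps_per_node         # softlimit defaults to AMPs per node
default_hardlimit = 20 * amps_per_node    # hardlimit defaults to 20x that

print(default_softlimit, default_hardlimit)
```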
Like all previous Teradata multi-stream backup solutions, this works best with groups of larger objects, but that needs to be balanced against which objects can be concurrently locked.