All Forums UDA
ARyan 6 posts Joined 01/08
01 Jul 2015
Generating a DSA (BAR) log file with a similar format to ARCMAIN output

I am trying to produce a log file for each DSA (Data Streaming Architecture) job with a format similar to ARCMAIN output. The closest thing I can find is to redirect the output of the dsc command line to a file (i.e. dsc job_status_log -name my_DSA_jobname -bucket n > my_log_file). The problem is that I can't find a way to specify 'all buckets' when there is more than one bucket of data to display (which is always the case). Can this be done? If not, can I generate the log file in a different way?

seven11 26 posts Joined 12/09
01 Jul 2015

Hi ARyan,
Not 100% sure, but have you tried running the command without the "-bucket <n>" parameter?
From the DSA manual for dsc job_status_log: "b|bucket [Optional] Select a bucket number to display a grouping of data when there are too many results returned to display at once."
i.e. you only need the bucket parameter when you want to limit the output.

ARyan 6 posts Joined 01/08
02 Jul 2015

Thanks for the tip, but I have already tried omitting the bucket parameter.
In that case only the first bucket is displayed.
The output does, however, include the line: "There are "39" buckets. Use the bucket parameter to specify which bucket to display."
Until a better solution can be found, I have written a bash script which captures the total number of buckets from the 'dsc job_status_log -name my_DSA_jobname' output into a variable (n), then loops n times, calling dsc job_status_log with a different bucket number each time and appending everything to a single log file.
But surely there is a better way! I am surprised this functionality does not come as standard. If there is no better way, I hope it will be introduced in a future release of dsc.
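For anyone who wants to try the same workaround, here is a minimal sketch of that loop. The job name is a placeholder, and the script assumes dsc prints a bucket-count line in the format quoted above ("There are "N" buckets. ..."); adjust the sed pattern if your dsc version words it differently.

```shell
#!/bin/sh
# Workaround: dump every bucket of a DSA job_status_log into one file.
# Assumes dsc is on PATH and that a run without -bucket prints a line
# like: There are "39" buckets. Use the bucket parameter to ...
dump_all_buckets() {
    job=$1
    log=$2
    # First call (no -bucket) shows bucket 1 plus the bucket count;
    # capture the count from the message text.
    n=$(dsc job_status_log -name "$job" \
          | sed -n 's/.*There are "\([0-9][0-9]*\)" buckets.*/\1/p')
    n=${n:-1}              # single-bucket jobs print no such line
    : > "$log"             # start with an empty log file
    i=1
    while [ "$i" -le "$n" ]; do
        dsc job_status_log -name "$job" -bucket "$i" >> "$log"
        i=$((i + 1))
    done
}
```

Called as, for example, dump_all_buckets my_DSA_jobname my_log_file. Each bucket is fetched with a separate dsc call, so expect it to be slow for jobs with many buckets.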

Rbar 7 posts Joined 10/04
15 Jul 2015

Any news on running incremental backups?

ARyan 6 posts Joined 01/08
19 Jul 2015

Hi Rbar,
I haven't used the incremental backup functionality yet. A full system backup takes less than 10 minutes at the moment so I don't think the pain of applying incremental restores is worth the effort at this stage. If the full system backup time increases to more than an hour I think it would be very useful.

seven11 26 posts Joined 12/09
20 Sep 2015

Probably far too late, but I found/tried this out yesterday...
If you are running on a standard Teradata installation/platform, the backup servers will have the GSC tools installed. N.B. the GSC tools are generally for break-fix operations and have limited/no support.
One of these scripts is called dsaextract; run it with the "-l" flag (lower-case L):
# dsaextract -j <job name> -l
This dumps general job information for the most recent job run into a single text file (for prior runs you will need to supply the Execution ID of that run). Basically it finds the last bucket value and then runs a for-loop to pull out all of the pages, so it can take a while for jobs with a lot of pages (a few seconds per bucket, with 100 lines per bucket).
