This article is the official release announcement of Teradata Viewpoint 14.10, with an effective release date of May 6, 2013. With new enhancements in the Alerting, Workload Management, and Monitoring areas, Viewpoint 14.10 continues to expand its scope, providing the ability to monitor Hadoop systems along with Aster and Teradata systems.


The primary themes of the Viewpoint 14.10 release are to provide front-end visualization for new Teradata Database 14.10 features and to provide Hadoop system monitoring. There are enhancements in the Alerting, Monitoring, and Management areas. The following are the highlights of Viewpoint 14.10:

  1. Stats Manager
  2. Hadoop System Monitoring
  3. Workload Management enhancements (group throttles, new classifications, ability to unlock rulesets, etc.)
  4. Reports in the Query Monitor portlet
  5. Alerting Enhancements

Browser support has also been updated to reflect support for Firefox 18, Chrome 24, Safari 5.1, IE 8.x and 9.x.

Stats Manager

The Stats Manager portlet complements the Auto Stats feature of Teradata Database 14.10 and works with release 14.10 and later. Stats Manager allows DBAs and users to efficiently manage their stats collection process. It is available as a new option in the Add Content | Tools menu.

Before we go into the details of this new feature, let's discuss why it is needed. Accurate cardinality and cost estimates help the Teradata optimizer choose an optimal plan, and statistics provide that cardinality information. Cardinality can change significantly with bulk load jobs, making stats stale and inaccurate. Sometimes it is challenging even for an experienced DBA to know which object statistics would be beneficial, which can result in collecting unnecessary stats or missing collections of critical stats. Collect stats jobs are usually resource-intensive because they contain many collect stats statements, so it is always good to know what is needed and what is not, and to save some CPU cycles. Due to scheduling constraints, the user may not have enough time to complete a collect stats job, so there is a need to prioritize and run collect stats for important or stale stats first. The Stats Manager tool simplifies these tasks and helps users automate the stats collection process. The Stats Manager portlet can be used to:

  • View statistics on a system
  • Schedule statistic collection jobs
  • Identify missing stats
  • Detect and refresh stale statistics
  • Identify and discontinue collecting unused statistics
  • View when statistics were last collected and are scheduled for collection again
  • Set the priority of a collect stats statement relative to other collect stats statements
  • View CPU utilization of collect stats jobs, allowing the user to analyze whether a particular job consumes more CPU than anticipated
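
Outside the portlet, much of this statistics landscape can also be inspected directly in SQL. A minimal sketch, assuming a hypothetical table SALES_DB.ORDERS (the name is illustrative, not from this article):

```sql
-- Hypothetical table name; substitute your own database and table.
-- Summarize which statistics exist on the table:
HELP STATISTICS SALES_DB.ORDERS;

-- Show the COLLECT STATISTICS statements that would re-create them:
SHOW STATISTICS ON SALES_DB.ORDERS;
```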

There are two main tabs in Stats Manager – Statistics and Job.

Statistics Tab

The Statistics tab shows all objects (e.g. databases and tables) on the system that have at least one statistic or at least one outstanding recommendation. The user can drill down in the data grid to navigate between databases, tables, and columns. Figure 1 is an example of the Statistics by Database view.

Figure 1

The Actions menu has three options:

  • Automate enables statistics to be collected by collect jobs.
  • Deautomate stops statistics from being collected by collect jobs.
  • Edit Collect Settings allows the user to edit thresholds, sampling, and histogram settings.

The information bar displays the percentage of statistics that are approved for automation, allowing the user to determine if more statistics need to be approved. The percentage of automated stats that have collect jobs allows the user to determine if additional collect jobs are needed. Recommendations displays a list of the recommendations made by an analyze job; by clicking the link, the user can approve or reject the recommendations. The Statistics table displays all objects with at least one statistic, or with one recommendation that has not been approved or rejected. The table is configured using Configure Columns from the Table Actions menu. In this tab, the user can automate any object for the stats collection process, which approves its statistics for collection by collect jobs. The user can also view statistics detail reports by drilling down to a stats object (see Figure 2).

Figure 2

Job Tab

The Job tab displays the list of user-defined collect and analyze job definitions. From this view, the user can create collect stats and analyze jobs, manage existing jobs, and review job reports. Figure 3 shows the top-level Job tab layout. The Actions menu has three options:

  • New Collect Job enables the user to define a job to collect statistics.
  • New Analyze Job enables the user to define a job to evaluate statistics use and make recommendations.
  • View History lists the run status and reports for collect and analyze jobs over time.

Figure 3

The Job Definitions table displays summary information about jobs and allows drill-down to show the details. The Job Schedule displays a nine-day view of jobs that are running, scheduled to run, or have already run. Mouse over a date to show the list of jobs for that day.

A collect job generates and submits COLLECT STATISTICS statements to the Teradata Database for objects that were approved for automation in the Statistics tab. The user can assign a priority to individual COLLECT STATISTICS statements (see Figure 4).
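
The statements a collect job submits are ordinary Teradata DDL. A minimal sketch of what one generated statement might look like, assuming a hypothetical table SALES_DB.ORDERS (table and column names are illustrative, not taken from this article):

```sql
-- Hypothetical example of a statement a collect job might generate and submit.
-- Table and column names are illustrative only.
COLLECT STATISTICS
    COLUMN (order_date),
    COLUMN (customer_id, order_date)
ON SALES_DB.ORDERS;
```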

Figure 4

The user can schedule a job to run for a limited time and then have a new schedule resume the job at a different time of day (see Figure 5).

Figure 5

An analyze job allows the user to evaluate statistics status and get statistics-related recommendations. Analyzing objects enables the user to determine where additional statistics might be useful and to identify existing statistics that are used frequently or are stale. Once the recommendations are generated, the user can review them and automate the objects for the stats collection process in the Statistics tab. See Figure 6 for the various functions that an analyze job can perform.

Figure 6

The Viewpoint Log Table Clean Up feature can be used to clean up job results stored in the DBS TDStats database.

Hadoop System Monitoring

Teradata Viewpoint 14.10 supports Hadoop system monitoring for Hortonworks-provided Hadoop solutions packaged as part of the Aster 3 Big Analytic Appliance. A new Hadoop Services portlet allows users to monitor the status of various services running on Hadoop systems. Using the expandable service views for MapReduce, HDFS, and HBase, users can view key metric details for the selected services (see Figure 7).

Figure 7

The Aster Node Monitor portlet has been renamed Node Monitor, as it now monitors both Aster and Hadoop systems. Using the Node Monitor portlet for Hadoop systems, users can view node-level metrics, available Hadoop services, and the status of services for each node on the system. Users can also view hardware statistics such as CPU usage, memory usage, and network activity. Navigating through the Hadoop system topology, users can also view detailed service component and JVM metrics for the HDFS and MapReduce services (see Figure 8).

Figure 8

Like Aster system monitoring, Hadoop system monitoring is integrated with existing portlets. The usability and look and feel of the portlets are maintained, but the underlying data and metrics correspond to the monitored Hadoop system. Below are the existing portlets that were modified to support Hadoop system monitoring:

  • Alert Viewer – View all the Alerts logged for Hadoop systems.
  • Capacity Heatmap – Displays trends for key metric usage related to system, HDFS and MapReduce.
  • Metrics Analysis - Displays and compares trends for key metric usage related to system, HDFS and MapReduce in a graphical format across different Hadoop systems.
  • Metrics Graph – Displays trends for key metric usage related to system, HDFS and MapReduce in a graphical format.
  • Space Usage – Monitors space usage on a node, such as total space, current space, percent in use, and available space.
  • Admin – Provides the ability to add Hadoop systems and define Alerts for Hadoop systems.
  • System Health – Hadoop systems can be identified by an “H” in the system's icon, and drill-down shows all the key metrics related to the Hadoop system. See Figure 9

Figure 9

Reports in Query Monitor

In Viewpoint 14.10, we added three new reports to the Query Monitor portlet.

  1. Multi-Session report: A new option in Query Monitor, By Utility|By Job, was added to display all the utility jobs that are running, with drill-down to the individual sessions logged on by a particular utility job and the ability to drill down further to see session details. (See Figure 10)
  2. Hot AMP report: A new option, By Vproc|By Skewed AMP, displays the AMPs with the most skewed sessions that exceeded the CPU skew threshold set in the PREFERENCES view. (See Figure 10)
  3. By PE report: A new option, By Vproc|By PE, displays the total number of sessions logged on to each PE and the CPU value for the PE. (See Figure 10)

Figure 10

Teradata Workload Management enhancements

Teradata Viewpoint 14.10 introduces group throttles, where a user can define a throttle on a group of workloads. We also added new classifications by UDF, UDM, memory usage, and collect stats. These features depend on Teradata 14.10. In Teradata Viewpoint 14.10, users can now unlock any ruleset if they have the appropriate permissions. Users can also model a system ruleset, which is useful for comparing workload management features across different platforms (Appliance vs. EDW) or different versions of Teradata.

Alerting Enhancements

Various new alert options and alert types were added in this release of Viewpoint.

  • An option to send an alert for Teradata Database restart was added.
  • An Include or Exclude Users option was added to session alerts. If a user wants to define a session alert for a small set of users, they need not add all other users to the exclude list; the include option can be used instead. It also supports the splat (*) wildcard. (See Figure 11)

Figure 11

  • Users can now send an alert for long-running sessions using the newly added Active time alert option in the Session alert type.
  • A Spool space (MB) alert option was added to session alerts to send an alert if a session uses more than the anticipated amount of spool space.
  • A Delta I/O (logical I/Os) alert option was added to send an alert for a session consuming excessive logical I/O during the last collection interval.
  • In the Database Space alert type, users can now specify thresholds for Current Spool Space (%) and Peak Spool Space (%) to send an alert when Current Spool Space or Peak Spool Space exceeds the threshold. Splat wildcard support was added to the Database Space include/exclude user list.
  • A new alert type, Table Space, was added late in the Viewpoint 14.01 release, with a new alert option on the DBC.TransientJournal table and the ability to specify current perm and skew thresholds.

Lock Logger

In Viewpoint 14.10 we modified the Lock Logger architecture for Teradata Database 14.10 and follow-on releases. When Viewpoint 14.10 is used with Teradata Database 14.10, the Lock Info collector uses the data written to the DBQL lock log table to capture lock information; therefore, DBQL query logging must be enabled with the “WITH LOCK” option.
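
A minimal sketch of enabling this, assuming a hypothetical account name etl_user; the LOCK=n option logs lock contention exceeding n centiseconds (minimum value 5) to the DBQL XML lock table:

```sql
-- Hypothetical example; "etl_user" is an illustrative account name.
-- LOCK=5 logs any lock contention exceeding 5 centiseconds so the
-- Viewpoint Lock Info collector can read it from the DBQL lock log table.
REPLACE QUERY LOGGING WITH SQL, LOCK=5 ON etl_user;
```

Check the QUERY LOGGING syntax for your specific release before applying.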

Finally, please refer to the associated Viewpoint Configuration Guide for details of the upgrade process and the User Guide for details of the new features.

We continue to have a voluminous release with copious features across a number of strategic areas. We hope you take advantage of the new additions and improvements in Teradata Viewpoint 14.10. As always, we look forward to your thoughts and comments.

jjudge 4 comments Joined 07/09
14 Aug 2013

Hi Shrity,
in Stats manager is entering stats job information all manual (i.e. is there a way to initialize the list from say DBQLog)?

Shrity 61 comments Joined 05/12
20 Aug 2013

Hi Jim,
Not sure if I understood your question correctly. If you are asking how to automatically generate the collect list: you can choose to generate the list automatically by selecting the "Automatically generate the collect list" option in the Collect List tab when you create a new collect job. If you are asking how to automate collect stats for the entire system: in the Stats tab -> Actions you can choose Automate at the entire-system level and then create a collect job using the Job tab. Let me know if this is not what you are looking for.
Thanks, Shrity

kerrg 2 comments Joined 10/10
15 Sep 2013

Utility limits
We have just upgraded our dev Viewpoint to test the new workload manager features that we were led to believe were available. At present you can only have one pipe for utilities, as the classification tab does not allow you to select anything other than utility type.

In our case we limit utilities to 15 during the day. We would like to split this between workloads, so that a particular APP has 5 slots and all others have 10. From my knowledge this is not possible in 14.0. We were led to believe it is available in 14.10.

From my tests it appears no change has been made to 14.10 to enable this feature.

Is there a way to create utility pipes with different allocations?

Shrity 61 comments Joined 05/12
18 Sep 2013

There is no change with regards to utility limits from 14.0 to 14.10. Are you trying to set up workloads instead of utility limits? You can set up workloads with request source and utility limits classification criteria, and then adjust the throttles per state. In this release you can define throttles on a group of workloads.
Thanks, Shrity

geethareddy 145 comments Joined 10/11
25 Oct 2013

Great article Shrity.
We are going to upgrade to the 14.10 DBMS (Viewpoint is already upgraded to 14.10) in the next couple of weeks. I am eager to make use of this, but I do have concerns in terms of impact. As Stats Manager makes use of the DBQL options, especially the XMLPLAN, STATSUSAGE, and USECOUNT options, for the objects being analyzed, are there any downsides of using Stats Manager by multiple people?
And is it a good idea to open up Stats Manager access to teams other than the DBA team? And does this portlet fall into the High/Medium/Low impact set of portlets?

Thank you,


Shrity 61 comments Joined 05/12
28 Oct 2013

Thanks Geeta,
To answer your questions..
If you are referring to managing stats for the same objects by multiple people, then there is no downside to using Stats Manager with multiple people. You enable XMLPLAN, STATSUSAGE, and USECOUNT once for each table/object.
Stats Manager is a tool for managing and scheduling your system's stats collection process. This is usually done by specific individuals like DBAs, so my recommendation is to limit it to the DBAs or whoever is in charge of the system stats collection process in your shop. If you are looking to open up stats recommendation/analysis functionality to developers, wait for the next release of Teradata Studio.
Stats Manager portlets are Medium-low impact portlets...

geethareddy 145 comments Joined 10/11
28 Oct 2013

Thank you Shrity.


sathishreddy 1 comment Joined 11/13
27 Nov 2013

Thankyou Shrity, this helps a  lot..!


geethareddy 145 comments Joined 10/11
28 Dec 2013

Hi Shrity,
As far as TASM availability is concerned, the EDW platform has complete TASM features whereas the Appliance platform has restricted features (no exceptions, category 3 WLs). I heard about the IWM option for SLES11 and SLES10. As per my understanding:
The SLES 11 IWM system has no options for ruleset exceptions or non-tactical workload exceptions; IWM systems only use tactical workload exceptions.
The SLES 10 IWM system has no options for exceptions.
What is special about the IWM feature? I feel it is almost like the Appliance platform features. Please help me with the difference between the IWM feature for the EDW platform and the IWM feature for the Appliance platform.


pradshank 2 comments Joined 12/13
08 Feb 2014

Hello Shrity,
Very nice article.

Shrity 61 comments Joined 05/12
10 Feb 2014

Thanks Shank !!

MarkVYoung 20 comments Joined 03/12
10 Feb 2014

Can you advise if there are likely to be any issues with using Viewpoint 14.10 against a Teradata 13.10 RDBMS? I know that certain new features will not work, but will all previous functionality from Viewpoint 14.00 still be available?
Is there a documented process, available to Teradata customers, that explains how to setup Viewpoint clustering? Do both systems have to be the exact same patch versions for clustering to work?

Shrity 61 comments Joined 05/12
11 Feb 2014

Viewpoint 14.10 supports TD 12.00 and above, so you should be good. Your understanding is correct: you will not be able to leverage new TD 14.10-specific features, but it is compatible with lower versions of TD.
You can find Viewpoint cluster setup information in the Configuration Guide. I recommend you discuss with your local CSR before implementing.

MarkVYoung 20 comments Joined 03/12
11 Feb 2014

Thank you for the confirmation. We will engage with our local Teradata team.

pradshank 2 comments Joined 12/13
18 Feb 2014

Hello Shrity
Please do post similar articles on TASM and Data Mover. I find your article about Viewpoint very helpful.

anilpalcs 1 comment Joined 03/14
30 Mar 2014

Thank you for the confirmation.

rr186010 2 comments Joined 01/11
07 May 2014

Hi Shrity,
We are on 14.10. Can you tell me the correct "WITH LOCK" option statement to be used in the DBQL query logging statement? I am using the following query logging statement but it will not recognize any WITH LOCK option.

When Viewpoint 14.10 is used with Teradata Database 14.10 the Lock Info collector uses the data written to the DBQL Lock Log table to capture lock information therefore DBQL query logging must be enabled with the “WITH LOCK” option.

Thank you

Shrity 61 comments Joined 05/12
08 May 2014

Below is the SQL...

LOCK=n     log any lock contention that exceeds n centiseconds in XML format in DBQLXMLLOCKTbl. The minimum acceptable value for n is 5.

For most SQL statements, you can use the following BTEQ command to see the syntax:

help 'sql <SQL>';

In this case it's 

help 'sql replace query logging';


Roopalini 31 comments Joined 05/08
13 Jun 2014

Hi Shrity,
We are on Viewpoint 14.10. Our Teradata machine is on 13.10. Ours is a multi-tenant environment, and hence the DB space usage alerts need to be sent to specific users. When I try to include or exclude databases from the alerts, it gives me a description in the alert (like the one I have mentioned below):
ERROR:UNKNOWN:database name is low on space!
Event Timestamp: 2014-06-13T12:08:07.150-04:00
Database Name=BAKTHRO
Space Used Percentage=69.573845
Max Space Used Percentage=78.10744
Current Spool Space=0.0
Peak Spool Space=0.0
Description: (Space Used Percentage > 60.0 and (Database Name = BAKTHRO or Database Name = BAKTHRO_TEST))
Can I avoid the "(Database Name = BAKTHRO or Database Name = BAKTHRO_TEST)" being mentioned in the description? When I have many DBs excluded/included, it is added as part of the description and it doesn't look good in the alert.

Shrity 61 comments Joined 05/12
13 Jun 2014

Hi Roopalini,
Today you cannot exclude description from Alert e-mail. The feature to exclude description from Alert e-mail is on our roadmap. (VP-25801)

19 Jun 2014

Hi Shrity,


Nice article.  Thanks much!


I hope 50% of basic query tuning will be automatically handled by Stats Manager now.


Could you please let me know

1). What are the drawbacks of enabling DBQLXMLPLAN AND STATSUSAGE LOGGING? 

2). Drawbacks of enabling USECOUNT ?  

Learnt from one of my friends that we can never do end logging of USECOUNT once it is enabled?

NTA 3532.  

3). Consider a scenario. For example, in my environment, stats are defined wrongly on almost all the columns (single and multi-column).

Single-column stats: col1, col2, col4, col5, col6, col10.

Multi-column stats: (col1, col2, col5); (col2, col4, col6, col10); (col2, col4, col5, col6, col10).

The doc says unused stats will be identified as unused. How about the multi-column stats? Will they be identified as unused? Does it check the entire combination as well?


Any suggestions/recommendations that needs to be taken care before enabling STATSMANAGER are higly appreciated.


Thank you!

Shrity 61 comments Joined 05/12
19 Jun 2014

Thanks Sravan.
I would suggest you look at the orange book for recommendations and the pros and cons of enabling certain logging. It describes the best practices for Auto Stats very well.
            Orange Book - Automated Statistics Management, 541-0009628
Re: Will Auto Stats identify Multi-Column stats as unused stats..?
The answer is yes....

danishbiya 1 comment Joined 12/13
21 Jul 2014

Hi Shirty,
I am using the Stats Manager portlet for automation and tried automating a job for one of my schemas. Initially, when I created a collect job and scheduled it to run every day, it ran fine. But for the last 3-4 days it has been failing with the strange error:
Unable to synchronize SystemId: 10 | id: 9 with Teradata on 'CollectJob:11'!
Today when I was debugging this error, it clicked in my mind that I had dropped some backup tables from this schema, and those were included in the stats job, which prevented the job from executing properly. I went ahead and edited the job, removed the dropped tables, and ran the job again, and it succeeded (waiting for the next scheduled run).
Just wanted to know: is there any mechanism to skip this kind of scenario and let the job run and complete with a warning message that the dropped table was not found, or something similar?
Any help in this matter is highly appreciated !!!
Thanks & Best Regards,


Sandeepyadav 27 comments Joined 09/13
11 Oct 2014

Hi ,
I am using Stats Manager for Auto Stats. I tried with a three-database scope at first. When I ran the analyze job I got the error:

Unable to synchronize SystemId: 2 | id: 4 with Teradata on 'AnalyzeJob:5'!
The analyze job gets its data from PDCRDATA. I try to run it, but every time I get this error.
Can anybody help with this?
Thanks - Sandeep.

Thanks, -Sandeep.

Greg.N 2 comments Joined 11/14
19 Nov 2014

Hi Shrity,
We're trying to disable the thresholds for a number of objects.
Is there a way to script this?
Viewpoint seems to only allow one object at a time.
Enjoyed your article!

rksurya1 1 comment Joined 12/14
08 Dec 2014

Can we have Teradata Managed Servers running on OS SLES 11 and database nodes running on OS SLES 10? Is there a compatibility matrix available?

Shrity 61 comments Joined 05/12
08 Dec 2014

That's a good suggestion. Please open a help ticket for the same.

Shrity 61 comments Joined 05/12
08 Dec 2014

There is no script available to disable thresholds.. :-(

Shrity 61 comments Joined 05/12
08 Dec 2014

Viewpoint TMS doesn't support SLES 11 yet. You can work with your CSR to get compatibility matrix.

teradatauser2 29 comments Joined 04/12
08 Jan 2015

Hi Shrity,
I am currently setting up Viewpoint on our new 14 system. I have one question regarding table space: typically, what should the values of perm space and % skew be for DBC.TransientJournal? Is there any way to arrive at these figures for my system?

Shrity 61 comments Joined 05/12
08 Jan 2015

Hi Samir,
I would suggest talking to your Teradata field (PS/CS) personnel.
 Thanks, Shrity

17 Feb 2015

Hello Shrity,
Hope you are doing good.
My Question is regarding COLLECT JOB in Stats Manager.
Under Collect Job, I have the list of databases(tables) and have scheduled to run them on every weekend.
Instead of scheduling it every weekend, how do I schedule my COLLECT JOB based on table change/growth? Where can I find that option in the collect job?
[Whereas in an ANALYZE JOB we have some options in the Functions tab (number of days of data to analyze, approval before collect, etc.), we don't have any such tab/options for collect jobs.]

Thank you,

teradatauser2 29 comments Joined 04/12
06 Jun 2015

Hi Shrity,
I have a requirement that a certain set of users would be running some queries using an application id on production box. Those queries would be analytical in nature and might be bad. I want to set up an alert at session level with the requirement that, If a query from that application id takes more than say 50K cpu secs, i need to abort them and send an email alert.
My approach is :
I can create an alert of alert type session. I can "include" this application id in the "include or exclude users" list. This makes sure that this alert applies only to this id. I can choose 50K as AMP CPU and choose "abort session" to abort such queries and send them an alert. I believe, by selecting this application id in the "include" section of the "include or exclude users" list, this rule will apply ONLY to this id and not to the rest of the ids on the platform. I have read the user guide and tested this (only sending the alert message, not the abort), but I wanted to be sure before enabling it, as this is a prod env.
Please let me know if my understanding is correct.
Thanks, Samir

teradatauser2 29 comments Joined 04/12
08 Jun 2015

Hi Shrity,
Did you get a chance to look into this.
Thanks !

Shrity 61 comments Joined 05/12
08 Jun 2015

In "Include or exclude users" option you can choose "users" and not "application id". If there is one to one mapping between users and application ids you can use that option and if you include a user the alert will only apply when an event occurs for that particular user.

teradatauser2 29 comments Joined 04/12
08 Jun 2015

Hi Shrity,

Thanks for the reply. When I check the alert setup, I can see application ids listed in the include option. I can see all kinds of application ids - BO, ETL, etc. Users schedule jobs using these ids. Is this the one-to-one mapping that you talked about? I am planning to have a distribution list set up for the users who will use this service id, and create an action set for that distribution list to abort and notify them in the alert setup. Do you think this won't work, or am I missing anything here?

Thanks !


Shrity 61 comments Joined 05/12
08 Jun 2015

Application id is different than Teradata user/username. In "Include and exclude user" option you can include and exclude Teradata users. If in your shop you have one user per application then you are good to go.

MarkJBrayshaw 4 comments Joined 12/12
13 Sep 2015

Hi Shrity 
           I'm running a very long CHECKTABLE from the Viewpoint remote console - can you tell me where I can find the output from this (other than in the portlet), please?

Shrity 61 comments Joined 05/12
14 Sep 2015

The output is not saved anywhere other than the Viewpoint portlet screen.

MarkJBrayshaw 4 comments Joined 12/12
14 Sep 2015

Thank you Shrity

ayesh 1 comment Joined 01/13
29 Oct 2015

Hi Shrity,
We are partially using Stats Manager to run collect jobs, and the rest of the stats we run as part of the ETL flow. I want to use Stats Manager for all types of stats that are refreshed/collected in the system. The problem I see with Stats Manager is that it doesn't allow us to add our own stats that are already defined on tables, as a collect job will only run stats that Stats Manager itself has analyzed. Is there a workaround for this? Can I add stats on my own to collect jobs through Stats Manager so they can be scheduled and refreshed? I was unable to find any such feature. Is there a plan to add this feature in the future?
