Teradata is very pleased to announce the official release of Teradata Viewpoint 13.03 effective March 12th, 2010.

The main theme of this Viewpoint release is Teradata Manager feature equivalence, but there's much more!


So here's a summary of this exciting and surprisingly feature-rich release. First, understand that core Teradata Manager functionality, including the PMON application, has been migrating to Viewpoint and the Teradata Management Portlets since the initial release back in August 2008. This release, however, marks feature equivalence, with the most significant addition being Teradata Alerting. This new shared message bus alerting architecture covers the alerting and actions functionality of Teradata Manager. The 13.03 release also includes many additions to the Teradata Management Portlets: new portlets, major advancements to existing ones, and new metrics. The Viewpoint Administration menus have new options for alert setup and system alert configuration, and have been reorganized for ease of use. Lastly, there is a very cool and valuable pie chart addition to the TASM Workload Monitor portlet as well. As with any Viewpoint release, this represents a patch update for the Viewpoint foundation software and the Teradata Management, TASM, and Self Service Portlets. See the sections below for a more in-depth look at all the new features and functions of this strategic release.

Teradata Alerting

The Viewpoint 13.03 release includes the new Teradata Alerting, based on a shared message bus architecture. There are three visible parts to alerting in Viewpoint to be aware of. The first is Alert Setup in the Viewpoint Admin menus. This menu allows configuration of the alert services that support alert actions. These alert actions include logging, email notification, SNMP traps, and running BTEQ scripts or custom programs. Here's a view of this, including a look at the reorganized Admin menu.

The second view into alerting is also located in the Admin menus, under the "Teradata Systems" menu option and through the Alerts choice. This is where one configures an alert trigger and its resulting actions (for example, sending the DBA an email when a CPU utilization threshold is exceeded). This same menu includes an option to migrate existing Teradata Manager alerts into the new architecture. Note that existing alerts should be migrated before any new alerts are created; otherwise the migration feature is disabled. See the Teradata Viewpoint 13.03 Configuration Guide for more information.

The third alert visualization in Viewpoint is through the new Alert Viewer portlet. This portlet allows viewing of generated alerts over different periods of time. Here's what this new portlet looks like:

New Teradata Management Portlets

Besides Alert Viewer, there are two other new portlets offered in the Teradata Management Portlets. The first of these is part Teradata Database monitoring and part Teradata platform management. This new portlet, called Node Resources, allows one to monitor both physical and virtual Teradata resources, providing informative metrics on:

  • Percentage of CPU used by nodes or vprocs
  • How system resource usage is spread across the vprocs
  • How much physical disk I/O, BYNET traffic, or host reads and writes are occurring
  • Whether congestion or excessive swapping is an issue on a single or group of nodes or vprocs

Here are a couple of sample views of this new portlet, although it can do much more than is shown here.

The second new portlet is Metrics Analysis. This new portlet applies the multi-graph technology introduced with Viewpoint Monitoring to Teradata resource metrics, combined with the power of rewind. Metrics Analysis allows multiple Teradata resource usage metrics from multiple Teradata systems to be displayed in one view over different durations. This portlet will be extremely helpful for easy cross-system comparison or for analyzing multiple metrics on a single system. Additional metrics are added to the view through Preferences. Here's a view of active sessions and CPU usage trends across two Teradata systems for the past week. Notice that the y-axis is not populated, as the metrics displayed are measured in different units.

However, if only like metrics are displayed, or one drills down on a particular metric, values appear on the y-axis representing the metric averages being displayed. The other feature when drilling down is the performance envelope, which shows the maximum and minimum values participating in the average. Here is a drill-down on CPU for the biggulp system.
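To make the average-plus-envelope idea concrete, here is a minimal Python sketch, purely illustrative and not Viewpoint's implementation, of summarizing raw metric samples into the average along with the min/max envelope shown when drilling down:

```python
# Illustrative sketch (not Viewpoint code): summarize raw metric samples
# into the average plus the min/max "envelope" for each time interval.
from collections import defaultdict

def envelope(samples):
    """samples: list of (interval, value) pairs.
    Returns {interval: (average, minimum, maximum)}."""
    buckets = defaultdict(list)
    for interval, value in samples:
        buckets[interval].append(value)
    return {
        i: (sum(v) / len(v), min(v), max(v))
        for i, v in buckets.items()
    }

# Hypothetical CPU samples collected within two ten-minute intervals
cpu = [("12:00", 40), ("12:00", 70), ("12:00", 55),
       ("12:10", 80), ("12:10", 60)]
print(envelope(cpu))
```

The drill-down view in the portlet is effectively plotting the first element of each tuple as the trend line, with the second and third shading the envelope around it.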

New Teradata Management Features

Accompanying the new portlets is a host of new features in the Teradata Management Portlets. A new data grid portlet architecture is now employed in table display type portlets (e.g., My Queries, Space Usage, Lock Viewer, Query Monitor), allowing for improved paging, filtering, sorting, column selection, and highlighting functions. See the feature overview of the Query Monitor portlet below to get a glimpse of these extensive enhancements. A number of new Teradata metrics were added to the System and Performance metric groups for display in the trending portlets, Capacity Heatmap and Metrics Graph. The new system metrics include CPU/Disk Ratio, FSG Cache Miss, Index Ratio, Logical MB/Second, Net A Usage (BYNET), and Wait I/O CPU, while the new performance metrics are Blocking Duration, Concurrency, Retrieve Time, and Total Time.

In particular, there were significant changes to the Viewpoint query portlets. First and foremost, all prior functionality of both Filtered Queries and Query Monitor was integrated into a new superset Query Monitor release. Customers with existing instances of Filtered Queries or Query Monitor on their dashboards will have them automatically converted to instances of the new Query Monitor portlet upon upgrade to the 13.03 release. The new selection menu in Query Monitor allows information to be displayed by session (all or set criteria), account string, or user. The criteria are still set within Preferences and allow for easy filtering to show only those queries of concern that exceed set thresholds. The configurable options are shown below.

Looking at all sessions, one will notice the integration of Filtered Queries and that more session states are now offered, including idle and different delayed options. One will also see the new data grid architecture, including the new column filtering. This allows for column filtering with wildcards ("d*" on username) and other filtering operations like ">3.1", as mentioned in the delta CPU column tool tip.
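The two filter styles behave roughly like the following Python sketch, offered only as an illustration of the filtering semantics, not Viewpoint's actual implementation: wildcard patterns on text columns and comparison expressions on numeric columns.

```python
# Illustrative sketch of the data grid's column filter semantics:
# "d*" style wildcards on text columns, ">3.1" style comparisons on
# numeric columns. Placeholder logic, not Viewpoint internals.
import fnmatch
import operator

_OPS = {">": operator.gt, "<": operator.lt,
        ">=": operator.ge, "<=": operator.le}

def matches(value, expr):
    # Try comparison operators first (longest prefixes before shorter ones)
    for op in (">=", "<=", ">", "<"):
        if expr.startswith(op):
            return _OPS[op](float(value), float(expr[len(op):]))
    # Otherwise treat the expression as a case-insensitive wildcard pattern
    return fnmatch.fnmatch(str(value).lower(), expr.lower())

rows = [{"user": "dbc", "delta_cpu": 5.2},
        {"user": "etl1", "delta_cpu": 1.4}]
hits = [r for r in rows if matches(r["user"], "d*")
        and matches(r["delta_cpu"], ">3.1")]
print(hits)  # → [{'user': 'dbc', 'delta_cpu': 5.2}]
```

Combining filters across columns narrows the grid exactly as combining the username wildcard with the delta CPU threshold does above.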

Another aspect of the data grid is the ability to customize portlet columns by selecting the change display configuration arrow. This allows for selecting/removing columns from a report, moving columns within a report (simple drag and drop), and setting of threshold values that will then highlight outliers in the primary display. All of these options are part of the new data grid architecture.

There are also advancements in query action operations, for instance taking an action on multiple queries in one operation and honoring Teradata credentials for that portlet session. Look at the view below, showing the selection boxes displayed when the abort operation is selected in the multiple query view. The top selection box is an immediate select-all. So if one wanted to abort a given user's queries, one could filter by user name and then choose select-all for the operation. Data grid paging is shown in this view as well, since there are multiple pages of data to display. Notice at the bottom of the portlet that this is page 1 of 3, with 560 rows to display. Note that the paging works in conjunction with the scroll bar on the right.

But there's more. Within Preferences, there is a select box for the new "Display Top Sessions Graph" option. This brings a new sessions bar into the Query Monitor view to display the top consumers of various resources. By hovering over these, one can get more information about these queries, as shown in the balloon below. Also displayed is the menu of available resources on which top consumers can be reported.

With a simple click on the top sessions graph for a resource, one is taken to the detail view of that particular consuming query. Notice the addition of Query Band and other new query information to this detailed view. The CPU Skew is highlighted from the setting made in the column customization discussion above. This view also shows another new navigation feature, which allows one to step through query detail views (Previous and Next) one by one without having to traverse back up a level. The toolbar is still there for operations against a single query specific to its state.

Workload Monitor Addition

A new distribution view has been added to the Workload Monitor portlet that displays workload CPU consumption percentages, allowing one to compare the CPU consumption of the workloads in an allocation group (AG) to the relative weight of the AG within the resource partition. In other words, it shows how the system is being utilized compared to the prioritization scheme. This new pie chart is accessible from the main portlet view.

If interested in a certain allocation group, hover over its relative weight, which brings forward the workloads utilizing that AG and provides details on the actual consumption.
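The weight-versus-consumption comparison the pie chart visualizes can be sketched with a few lines of Python. The AG names and numbers below are made up for illustration; only the arithmetic (each AG's share of total weight versus its share of measured CPU) reflects the idea.

```python
# Illustrative sketch with assumed numbers (not Viewpoint internals):
# compare each allocation group's relative weight within its resource
# partition to its measured share of CPU consumption.
ag_weights = {"Tactical": 40, "DSS": 20, "Load": 20, "Background": 5}
cpu_seconds = {"Tactical": 120, "DSS": 600, "Load": 200, "Background": 80}

total_weight = sum(ag_weights.values())   # 85
total_cpu = sum(cpu_seconds.values())     # 1000

for ag in ag_weights:
    rel_weight = 100 * ag_weights[ag] / total_weight
    cpu_share = 100 * cpu_seconds[ag] / total_cpu
    print(f"{ag}: weight {rel_weight:.1f}% vs CPU {cpu_share:.1f}%")
```

In this made-up example, DSS is consuming far more CPU than its relative weight would suggest, which is exactly the kind of mismatch the new distribution view makes visible at a glance.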

For more information on the TASM Workload Monitor portlet, see the Viewpoint 13.02 Release Article.

Teradata Manager Considerations

As mentioned, the Viewpoint Teradata Management Portlets 13.03 release is the feature equivalence release for Teradata Manager and PMON. There are a few items to consider, however. First, a couple of minor items were identified late and still need addressing: utility partition usage and the Teradata Manager data dictionary table cleanup settings. Both will be addressed in an upcoming patch release to 13.03. The utility information will be a new item choice in the Query Monitor portlet, while the data cleanup settings will be added to the Viewpoint Admin menus under the Teradata Systems grouping.

There are also two features, Teradata Manager Scheduler and Priority Scheduler Administrator (PSA), that will not migrate to Viewpoint. These were business decisions, as alternative solutions were deemed acceptable. Items leveraging the Teradata Manager Scheduler will need to be transitioned to an operating system (OS) based scheduler (for instance, Linux cron jobs). As the need for manual configuration of the priority scheduler is dissipating, the PSA interface is being discontinued. For instances where manual priority scheduler modifications are necessary, one can continue to use the schmon command line utility.
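For the cron transition, a scheduled task might look like the following hypothetical crontab fragment. The paths and script names here are placeholders, not anything shipped with Teradata Manager or Viewpoint.

```shell
# Hypothetical crontab entries replacing Teradata Manager Scheduler tasks;
# all paths and script names are placeholders.
# Nightly collection script (e.g. wrapping a BTEQ job) at 2:00 AM:
0 2 * * *  /opt/teradata/scripts/collect_space.sh >> /var/log/td/collect_space.log 2>&1
# Weekly cleanup job, Sundays at 3:30 AM:
30 3 * * 0 /opt/teradata/scripts/weekly_cleanup.sh
```

Any logon credentials the scripts need would be handled inside the scripts themselves (for example, a protected logon file), since cron provides only the scheduling.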

That's the Viewpoint 13.03 release. A lot of good stuff!

TD_Onlooker 1 comment Joined 03/10
31 Mar 2010

Hi, can Viewpoint handle alert generation using the DBCMNGR.AlertRequest table? Or do we still need the TD Manager server for this feature?

gryback 151 comments Joined 12/08
02 Apr 2010

With some help from friends in Development and as documented in the Viewpoint Configuration Guide:

The Viewpoint Alert Request data collector monitors the dbcmngr.AlertRequest table for incoming alert requests. If a table row contains valid data, the contents are forwarded to the Alert Service to process the alert action. The Alert Request data collector also monitors the dbcmngr.MonitorRequest table. Any Teradata Database utility or user program can request that Teradata Viewpoint monitor its progress by inserting rows into the dbcmngr.MonitorRequest table. Each row includes fields that indicate the date and time by which the next row must be inserted. If a new row is not inserted before the specified date and time, the Alert Request collector forwards the contents to the Alert Service to process the alert action.
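In other words, MonitorRequest acts as a heartbeat: each row promises a successor by a deadline, and a missed deadline triggers the alert. A rough Python sketch of that idea (purely illustrative, with invented names, not the collector's actual code):

```python
# Illustrative sketch (not Viewpoint code) of the dbcmngr.MonitorRequest
# "heartbeat" idea: each inserted row carries a deadline for the next row;
# if no new row arrives by that deadline, the alert action fires.
from datetime import datetime, timedelta

class MonitorRequestCollector:
    def __init__(self):
        self.deadline = None

    def row_inserted(self, now, next_row_due_in_minutes):
        """A utility inserted a row promising another one by the deadline."""
        self.deadline = now + timedelta(minutes=next_row_due_in_minutes)

    def poll(self, now):
        """True when the heartbeat was missed and an alert should fire."""
        return self.deadline is not None and now > self.deadline

c = MonitorRequestCollector()
start = datetime(2010, 3, 12, 12, 0)
c.row_inserted(start, next_row_due_in_minutes=10)
print(c.poll(start + timedelta(minutes=5)))   # False: within deadline
print(c.poll(start + timedelta(minutes=15)))  # True: heartbeat missed
```

A healthy utility keeps calling the equivalent of row_inserted before each deadline passes, so poll never returns True until the utility stalls or dies.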

darrenrankine 3 comments Joined 05/07
01 May 2010

We've recently implemented both TASM and Viewpoint, and simply put, the results are amazing. We are filtering disruptive workloads, throttling others, and aborting some sessions after they have been skewing for a while. Recently we noticed a number of blocks taking place during one of our ETL loads. Is there a heatmap for blocking? It would be pretty amazing to see Avg, Max, and Min blocks per hour.

gryback 151 comments Joined 12/08
03 May 2010

Awesome, nice to hear the positive comments on Viewpoint and TASM!

Regarding reporting for blocking, have you considered the "Lock Viewer" portlet? This is similar to the replaced Teradata Manager Locking Logger. I think this might get much of what you are looking for in locking/blocking reports. We could consider additional metrics in the trend reporting portlets, but I'm afraid the data summary would only lead to investigating the additional details in Lock Viewer anyway. Let me know what you think.

darrenrankine 3 comments Joined 05/07
05 May 2010

Thanks for the response. We are using Lock Viewer and Filtered Queries. However, you end up rewinding minute by minute to find when blocks are happening. It would be more productive to start with a blocking heatmap to identify in which hours over the month the most blocks are happening, and then drill into Lock Viewer and Filtered Queries for that time period. Consider this real-world example that we found using Viewpoint.
12:41 PM
An ETL job loading dimension table data wanted to perform a specific action against the table. The job was blocked because a user was running a report that involved the dimension data.
1:33 PM
52 minutes later, the ETL job is still blocked by the one user, and now 31 queries are also blocked because they are waiting on the ETL job to complete, which in turn is waiting on the user's report to complete. The 31 queries belong to 7 users.
1:38 PM
The report that is blocking the ETL is terminated by the DBAs. The ETL job completes, and the 31 blocked queries are freed from the blocked state and start to run.

Consider this:
If a report of one of the 7 users usually takes 5 minutes to execute, and the user submitted the report at 12:41, then the report would have taken 58 minutes to complete. This is because the report would have been blocked for the first 53 minutes.
We already know we need to rewrite the ETL job, it’s some pretty old code. However, we’d like to see whether more examples exist like this, as this type of issue really affects the user perception of performance.

gryback 151 comments Joined 12/08
05 May 2010

I'm convinced. I'll create a new enhancement request and reference this article discussion as backing.
