
Overview

This article describes how to combine exploratory analytics and operational analytics within the Teradata Unified Data Architecture (UDA). The UDA is a logical and physical architecture that adds a data lake platform to complement the Teradata Integrated Data Warehouse. In the Teradata advocated solution, the data lake platform can be either Hadoop or a Teradata Integrated Big Data Platform optimized for the storage and processing of big data. QueryGrid is an orchestration mechanism that supports seamless integration of multiple types of purpose-built analytic engines within the UDA.

Unity Director has a wide set of capabilities to control where users and requests are routed. Unity Director 15.00 now offers so many routing choices that they can be a little confusing. Let's talk about some of the most common uses for the available routing options (and some cool ones you might not have considered yet).

Getting from here to there – A phased rollout of Unity Director & Loader

The Unity Ecosystem Manager 15.10 release is now GCA. Ecosystem Manager is tightly integrated with Unity Director/Loader, Data Mover, load utilities, and BAR to achieve high availability and disaster prevention/recovery. It is an integral part of the Unity portfolio (Unity Director/Loader and Unity Data Mover), providing a holistic view of the entire Unified Data Architecture (UDA). Ecosystem Manager can be deployed in single- and multi-system UDA environments.

 

We are pleased to announce the General Customer Availability (GCA) of Unity Data Mover 15.10 as of June 2015. With release 15.10, Unity Data Mover is now certified with Teradata Database 15.10. Data Mover 15.10 can utilize QueryGrid/foreign server connections to move data from Hadoop to Teradata (if installed) in addition to using the Teradata Connector for Hadoop (TDCH).  Hadoop support has also been extended to support TDH 2.1.7, Hortonworks 2.1.7 on commodity hardware and Cloudera 4.3 on commodity hardware. Aster support is extended to Aster 6.10.

Life often requires compromises. In a data warehouse, there's often a trade-off between providing quick response times for reporting users running complex reports and loading tables with new, up-to-date data. Adding join indexes to a table can speed up reports and solve redistribution problems, but they do come at a cost to table updates.

One of Unity Director and Loader’s core benefits is the ability to keep multiple Teradata systems synchronized with online, transactionally consistent changes. Unlike post-transactional replication, Unity Director’s SQL-multicast sends requests in parallel to all connected systems at the same time.
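Conceptually, SQL-multicast dispatches the same request to every connected system at once and treats the request as successful only when all of them acknowledge it, rather than replaying changes after the fact. The following is a minimal illustration of that dispatch pattern only, not Unity Director's actual implementation; the SqlEndpoint interface is a hypothetical stand-in for a managed Teradata system.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Conceptual sketch of parallel SQL dispatch, in the spirit of SQL-multicast.
// SqlEndpoint is hypothetical; it models one connected Teradata system.
public class SqlMulticast {
    public interface SqlEndpoint {
        String execute(String sql) throws Exception; // returns an acknowledgement
    }

    // Send the same request to all endpoints in parallel; succeed only if all do.
    public static List<String> multicast(List<SqlEndpoint> systems, String sql)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(Math.max(1, systems.size()));
        try {
            List<Future<String>> futures = pool.invokeAll(
                systems.stream()
                       .map(s -> (Callable<String>) () -> s.execute(sql))
                       .toList());
            List<String> acks = new ArrayList<>();
            for (Future<String> f : futures) acks.add(f.get()); // rethrows any failure
            return acks;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<SqlEndpoint> systems = List.of(
            sql -> "system1: ok",
            sql -> "system2: ok");
        System.out.println(multicast(systems, "UPDATE accounts SET balance = 0"));
    }
}
```

The key property the sketch preserves is all-or-nothing acknowledgement: if any system fails, `f.get()` rethrows and the request as a whole is treated as failed.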

Announcing Teradata Unity Data Mover 15.00

We are pleased to announce the General Customer Availability (GCA) of Unity Data Mover 15.00. With release 15.00, Unity Data Mover is now certified with Teradata Database 15.00, and supports important database features such as JSON and foreign server definitions. Data Mover can now move foreign server definitions, as well as utilize QueryGrid/foreign server connections to move data from Hadoop to Teradata (if installed). Hadoop support has also been extended to support TDH 2.1 and 1.3.2.

Unity Ecosystem Manager is part of the Teradata Unity portfolio of products supporting multi-system Teradata Environments.  While part of the Unity portfolio, it can also be leveraged as an event-driven system for efficient monitoring, alerting  and control of an entire analytic environment (Applications, Servers, Jobs, Tables) for a single Teradata, Aster or Hadoop system. 

Unity Ecosystem Manager (Ecosystem Manager) is part of the Teradata Unity portfolio of products (Unity Director, Unity Loader, Unity Data Mover and Unity Ecosystem Manager).  Ecosystem Manager monitors and controls critical applications in both single and multi-system Teradata environments.  It is an event-driven system that monitors operations, generates alerts, and pushes those mission-critical application alerts to operations staff, DBAs, and other subscribed EDW staff. 

The Analytical Ecosystem can be quite complex. It usually consists of multiple managed servers containing instances of databases, ETL servers, infrastructure services, and application servers. Monitoring and managing applications in this environment can be a very challenging task. What is the state of your hardware? How are your applications doing? How many jobs finished successfully, how many failed, and why did they fail? These are the types of questions database administrators typically ask themselves. Now, with the addition of Hadoop infrastructure components within an ecosystem, monitoring has become even harder. Unity Ecosystem Manager helps users answer those questions and perform any necessary maintenance tasks.

Data synchronization can be challenging, but it doesn't have to be... 

Unity Director is an extremely capable product that offers a wide variety of benefits. One of its unique benefits is the ability to easily route users and requests to specific Teradata systems. There are many potential uses of this ability; one in particular is to selectively direct users who want access to historical data to the specific systems where that data resides.

Unity Director 14.00 provides user and query routing between multiple Teradata systems. This satisfies two requirements – routing users to a system that has data to satisfy their queries and re-routing users during planned or unplanned system outages. Hence, end users receive the ultimate benefit of continuous, transparent access to data. In addition, Unity Director provides high availability, disaster recovery, and data synchronization for multiple Teradata systems. For more information on Unity Director 14.00 please follow the links provided at the end of this article.

In this article, I will provide details of the various routing rules that can be created with Unity Director 14.00. One can choose from three routing rule modes: Auto, Preferred, and Balanced. I will explain how reads and writes can be configured in each of these modes.

This study will demonstrate how the Unity Director 14.00 user interface handles the initial configuration and setup activities users will face when starting to work with the product.

Unity Director handles data for mission-critical applications that demand continuous high availability (HA), data location identification, session management, and data synchronization. It simplifies multiple Teradata Database system management tasks including routing users to the data location, rerouting users during an outage, and synchronizing database changes across Teradata Database systems. Unity Director integrates with Unity Ecosystem Manager to enable administrators to view system and table health, receive warning and critical alerts, and view analytics and statistics about data operations. As a bulk load option, Unity Director can utilize Unity Loader to selectively route large data loads from a client to Teradata systems without additional administrator operations.

In this article, I will be showing you how to establish Unity Director 14.00 routing rules which will enable you to efficiently load balance reports across multiple managed Teradata systems.

Unity Director 14.00 and Unity Loader 14.00 are now available!  Last year Teradata announced a new query management and data synchronization product to the marketplace.  Now we've taken it to the next level.  Unity Director 14.00 provides a new user interface, new and improved routing rules and scalable configuration options.  Unity Loader 14.00 is a new offering extending beyond the basic synchronization of SQL updates and now enabling the intelligent synchronization of high volume/bulk loads. 

Teradata Data Mover can be a valuable product to add into a dual system production environment either as a scheduled / triggered step in your load process or for ad-hoc re-synchronization of tables.

Teradata Unity 13.10, the latest enabling technology of the Teradata Analytical Ecosystem was formally announced at PARTNERS Conference 2011. The product’s focus is to simplify the analytical ecosystem by removing the everyday complexities involved in query management and data synchronization across multiple Teradata systems. Teradata Unity delivers on this strategy by way of product automation.

Teradata Unity serves as an abstraction layer for users and applications making multiple Teradata systems appear as a single Teradata database instance. Unity dynamically manages all the traditional activities around query routing and data synchronization creating a single integration layer for the Teradata ecosystem.

The Teradata Multi-System Manager (TMSM) product monitors and controls hardware components, processes, and data loads. Ever wonder who is monitoring the monitor? The Internal Monitor should not be confused with the external failover monitor, which is responsible for monitoring the TMSM Master.

This article would be useful to anyone attempting to better understand the TMSM Internal Monitor.

Teradata is pleased to announce the Teradata Multi-System Manager (TMSM) 14.00 release effective May 23rd, 2012 (sorry, a little tardy on the release article).  This release is focused on customer enhancement requests, many of which were tied to extending the TMSM infrastructure to better suit customer environments and volume usage. The release includes focused performance improvements, portlet updates, enhancements to the API, and expanded browser support.

Data Mover is a Teradata application that allows users to copy databases or tables between Teradata systems. It is a JEE-based application composed of three major code components: the Client (command-line interface or Viewpoint portlet), the Daemon, and the Agent.

Teradata Multi-System Manager (TMSM) is an application for monitoring and administration of single- and multi-system environments. The TMSM Report Viewer allows users to create and view reports about the Teradata ecosystem using data collected by TMSM. The TMSM data model is documented in Appendix E of the Configuration Guide. For more information about TMSM, please read this article.

This article provides information about TMSM reporting capabilities, shows how to create custom reports, and reviews the out-of-the-box reports provided.

Teradata Unity 13.10, the latest enabling technology of the Teradata Analytical Ecosystem, was formally announced at PARTNERS Conference 2011. The product’s focus is to simplify the analytical ecosystem by removing the everyday complexities involved in query management and data synchronization across multiple Teradata systems. Teradata Unity delivers on this strategy by way of product automation.

We are pleased to announce the release of Teradata Multi-System Manager (TMSM) release 13.11 on September 28, 2011. This release allows TMSM to integrate with the newest product in the Dual Systems / Analytical Ecosystem family, Teradata Unity. TMSM was also enhanced to provide additional table validation options and support for other client platforms.

In this Mini Section we are going to extend upon the Simple Quotation Engine from FNP#6 in order to do Quotation Management. This will involve persisting information from the “Get Quote” method including Customer and Property Details as well as the resulting Quotation information. In this first part we will discuss the business process and establish the new database tables we will need. In the second part we will create the necessary Data Access Objects using iBatis and wire the Quotation Manager into the Web service of FNP#8 and FNP#9, and in the third part we will complete the Quotation Management process by providing for the ability to buy a TZA Insurance Policy using the Web service.

In the first two parts of this Mini Section we have been looking at the concept of embedding analytical processing directly within the Teradata Database. Part one used a SQL Stored Procedure to act as an Isolation layer between the Enterprise Application and the Teradata Database. Part two replicated this isolation concept using the Java External Stored Procedure (JXSP) approach. Part three combined the JXSP with the previously prepared, Spring Framework based, Business Processes. This final part illustrates how little code is now required in the Presentation Veneer (Web service process) due to the embedding of the Business Logic within the Teradata Database.

In the first two parts of this Mini Section we have been looking at the concept of Embedding analytical processing directly within the Teradata Database. In the first part we used a SQL Stored Procedure to act as an Isolation layer between the Enterprise Application and the Teradata Database. In the second part we progressed along the same line of providing an Isolation layer but this time we employed the Java External Stored Procedure (JXSP) approach. In this third part we combine the JXSP, as an Isolation layer in order to cross the great divide between the enterprise and the database, with the TZA-InsuranceProcess business processes developed previously.

Last time we introduced the SQL Stored Procedure as a means to provide for Embedded Analytics.

However, as of Teradata 12.0 it is possible to use the Java language as the basis for External Stored Procedures (known as JXSPs), so this week we will develop a Java-based version of the ApplyRiskFactorsToQuote Stored Procedure.
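At its core, a JXSP is a public static Java method that Teradata maps onto a procedure signature, with OUT parameters passed as single-element arrays. The following is a minimal sketch of what a Java version of ApplyRiskFactorsToQuote might look like; the zip codes and factor values are made-up illustration data, and in a real JXSP you could instead look the factors up over the default in-database connection (jdbc:default:connection).

```java
import java.math.BigDecimal;
import java.math.RoundingMode;
import java.util.Map;

// Sketch of a Java External Stored Procedure (JXSP) body. Teradata maps
// IN parameters directly to Java types and OUT parameters to one-element arrays.
public class ApplyRiskFactorsToQuote {
    // Illustrative factors only; a real JXSP would query a risk-factor table
    // via DriverManager.getConnection("jdbc:default:connection").
    private static final Map<String, BigDecimal> FACTORS = Map.of(
        "92127", new BigDecimal("1.15"),
        "10001", new BigDecimal("1.40"));

    public static void applyRiskFactorsToQuote(String zipCode,
                                               BigDecimal baseQuote,
                                               BigDecimal[] adjustedQuote) {
        BigDecimal factor = FACTORS.getOrDefault(zipCode, BigDecimal.ONE);
        adjustedQuote[0] = baseQuote.multiply(factor)
                                    .setScale(2, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        BigDecimal[] out = new BigDecimal[1];
        applyRiskFactorsToQuote("92127", new BigDecimal("1000.00"), out);
        System.out.println(out[0]); // prints 1150.00
    }
}
```

On the database side, such a method is registered with a CREATE PROCEDURE statement using LANGUAGE JAVA and an EXTERNAL NAME clause pointing at the class and method inside the installed jar; the article walks through the actual DDL.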


Last time we introduced the Macro and the Stored Procedure as a means to provide for Isolation between the SQL call and the underlying database structure.

This week we are going to keep on the core Teradata trail by looking into Stored Procedures as means to provide for Embedded Processing.

Last time we did a full-on What, Why and How description of some pretty Teradata-specific information around Query Banding, and showed how we could weave this into the Web Application and Data Access layers so as to minimize the impact on the Business Service layer and its developers.
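As a refresher, a query band is just a semicolon-delimited list of name=value pairs attached to the session (or transaction) with a SET QUERY_BAND statement. A small helper along these lines is the kind of thing the Data Access Layer can use to build that statement once per session; the pair names below are illustrative, not prescribed.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Builds a Teradata SET QUERY_BAND statement from name=value pairs.
// A DAO layer can execute the returned SQL once per session so that every
// subsequent request carries identifying metadata for the DBA to see.
public class QueryBands {
    public static String setForSession(Map<String, String> pairs) {
        StringBuilder band = new StringBuilder();
        for (Map.Entry<String, String> e : pairs.entrySet()) {
            band.append(e.getKey()).append('=').append(e.getValue()).append(';');
        }
        return "SET QUERY_BAND = '" + band + "' FOR SESSION;";
    }

    public static void main(String[] args) {
        // LinkedHashMap keeps the pairs in insertion order.
        Map<String, String> pairs = new LinkedHashMap<>();
        pairs.put("ApplicationName", "TZA-Insurance");
        pairs.put("ClientUser", "broker01");
        System.out.println(setForSession(pairs));
        // prints: SET QUERY_BAND = 'ApplicationName=TZA-Insurance;ClientUser=broker01;' FOR SESSION;
    }
}
```

In practice the returned string would be executed over JDBC (for example via `Statement.execute`) immediately after the connection is obtained, which is exactly the weaving into the Data Access Layer described above.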

This week we are going to keep on the core Teradata trail by looking into Macros and Stored Procedures as means to provide for Isolation and Embedded Processing.

Last week we used the Eclipse WTP Web service Wizard to create the TZA Property Insurance Web service. The wizard automatically generated a series of plumbing classes (within the com.teradata.tza.insurance.service and com.teradata.schemas.tza.insurance.service packages) plus a single implementation class (PropertyInsuranceSoapBindingImpl.java) that the author is expected to customize. This class provides an SOA Presentation Veneer (User Interface) that can act as an entry point into the TZA-InsuranceService Business Process.

This week we will demonstrate how to "Wire" up the different parts of the project in order to connect the Web service Presentation Veneer into the Business Process, Business Objects and Repository layer and therefore on through to the database and the data tables.

So, as we discussed previously, TZA-Insurance operates as an Insurance Underwriter, allowing other Insurance Brokers to offer insurance services (insurance quotes and policies) to the final customer. In order to expose the business processes of TZA-Insurance, we provide a Web service definition to its various Clients (Insurance Brokers, Web sites, etc.) that allows them to create a Web service client interface within their application environment (Java or .Net). This Client interface then operates against the TZA Property Insurance Web service, initially to get an insurance Quotation based upon the characteristics of the Property to be insured and then, if acceptable to the Customer, to buy an Insurance Policy based around that Quotation.

In this session of the Friday Night Project we are finally going to create a Presentation Veneer that will use the Simple Quotation Engine business process we created last week.

However, don’t go getting all excited about MVC Web Pages, Portlets or Web services as we are going back to basics for this one with a plain old Command Line or Console Application interface. Think of it like “Basic Training” Toto, where we establish a usage pattern that we can apply across all further Presentation Veneers.

TZA-InsuranceProcess is a set of APIs that provides a suite of Business Processes representing TZA-Insurance, ranging from a simple Quotation Engine for the demonstration of the architectural concepts, through Complex Insurance and Quotation Management for Web and Web service applications, to Insurance Summaries that can be represented in a Web Portal.

These APIs are written in Java and provide a series of Business Processes that behave independently of each other or can be orchestrated to provide a larger Business Service or Process. They are packaged into a single jar file [TZA-InsuranceProcess.jar], which relies upon other Teradata jar files included in its classpath, such as tdcommons-context.jar and tdcommons-access.jar.

Finally it’s time to start some real development within the Friday Night Project. This week we are going to create the TZA-Database project within which we will ultimately collect all of the project information (SQL, Data and build files) necessary to create and maintain a consistent version of the TZA-Database. We will also establish the base infrastructure for an ANT based build that can clear any existing database elements prior to creating the ZipCodeRiskFactors table and loading the base Risk Factors data.

Having previously introduced the “What”, “Why” and “How” of Active Integration and Solid Architecture, this week we start to build up the development environment (Eclipse with the Teradata Plug-In), Teradata Database Schema (TZA_DB) and User (TZA_USER) required to support TZA-Insurance.

After a couple of Weeks of Introduction to the “What” and “Why” of Active Integration plus a start at the “How” around creating a Solid Architecture, this episode of the Friday Night Project introduces the Teradata Sample Application (TZA-Insurance).

Last Friday we defined the “What” and “Why” of our new Active Integration world.

So why don’t we start getting straight into the “How” and pick out some User Interface for each of our application classes?

Not so fast, let’s get some Solid Architecture principles in place before we start chasing up that pretty road Toto.

Welcome to the first article in The Friday Night Project series, where we'll start with a background on the nature and role of the Enterprise Data Warehouse.

The aim of this, occasionally updated, set of articles is to assist a wide range of users to gain practical experience with developing “Active” Web, Web service and Portlet applications that are targeted at and make best use of Teradata.

The audience for this set of articles is expected to be very wide, ranging from Teradata associates within the R&D and Professional Services organizations, through Teradata Customer and Partner developers (all of whom wish to learn and employ the advocated approaches to Teradata Application Development), to the potential “Next Generation” of Teradata-oriented developers who are currently in their final years of College/University and thinking about how to implement their Degree or Masters projects.