Latest blog posts
05 Aug 2010

The Wall Street Journal is running a series, “What They Know,” this week on Web privacy.

What caught my eye was today’s article on Web analytics, “On the Web's Cutting Edge, Anonymity in Name Only.”  A quote from the article, “firms like [x+1] tap into vast databases of people's online behavior—mainly gathered surreptitiously by tracking technologies that have become ubiquitous on websites across the Internet. They don't have people's names, but cross-reference that data with records of home ownership, family income, marital status and favorite restaurants, among other things. Then, using statistical analysis, they start to make assumptions about the proclivities of individual Web surfers.”

29 Jul 2010

This is just a quick note with some information that should make it easier for you to create charts from ResUsage data.

Much of the information you should be tracking comes from DBC.ResUsageSPMA, and that is the table this process analyzes. Note that this process is for TD12 only.
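As a starting point, a query along these lines pulls node-level CPU utilization out of DBC.ResUsageSpma in a shape that charts easily. This is only a sketch; the column names (CPUUServ, CPUUExec, CPUIdle, CPUIoWait) reflect a typical TD12 ResUsageSpma layout, so verify them against your own dictionary before using it.

```sql
-- Hourly CPU busy % per day, suitable for a line chart
SELECT  TheDate,
        TheTime,
        SUM(CPUUServ + CPUUExec)                        AS BusyCPU,
        SUM(CPUIdle + CPUIoWait)                        AS IdleCPU,
        100 * BusyCPU / NULLIFZERO(BusyCPU + IdleCPU)   AS PctCPUBusy
FROM    DBC.ResUsageSpma
GROUP BY TheDate, TheTime
ORDER BY TheDate, TheTime;
```

Exporting the result to a spreadsheet and plotting PctCPUBusy over TheDate/TheTime gives a quick utilization trend chart.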

04 Jun 2010

We've been flying along recently on Developer Exchange. Where to begin?

Let's start with new software. Teradata SQL Assistant Java Edition 13.01 was released in mid-May, with new features such as FastExport and FastLoad of table data, advanced authentication, and updates for Eclipse 3.5.2. Read the release announcement for more. The Teradata Plug-in for Eclipse 13.02 was released simultaneously. We also saw updates to Viewpoint PDK 13.02, Teradata .NET Data Provider 13.01.00.02, Elastic Marts Builder 1.0, and Teradata Geospatial Release 1.5.

27 May 2010

Welcome back to the series of blogs on cool Viewpoint features. Hopefully by now, you've heard about the Teradata "time travel" feature called Rewind. Rewind lets you easily and seamlessly view portlet data and interactions as they appeared in the past, for analysis, comparison, or just general reporting. Rewind, dare I say, is a paradigm shift in systems analysis and management. If you're interested in learning more about Rewind, start with the Viewpoint Rewind screencast.

24 May 2010

I’ve mentioned it before, Marcio has blogged about it, and customers have brought it up at the Partners Conferences. It’s cheap, fast, and risk-free, with immediate results. But some of you are still not getting it. Or it could be you’re one of the few who truly don’t need it.

Either way, I’m going to take this opportunity to pound away a little more on the advantages of collecting statistics on your dictionary tables.
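To make it concrete, collecting these statistics is a handful of one-line statements. The tables below (DBC.TVM, DBC.TVFields, DBC.Indexes, DBC.Dbase) are real dictionary tables, but treat the column list as an illustrative sample rather than the complete recommended set, which varies by release.

```sql
-- Statistics on dictionary tables help the optimizer plan
-- queries against DBC views (and speed up tools that use them)
COLLECT STATISTICS ON DBC.TVM      COLUMN (DatabaseId, TVMNameI);
COLLECT STATISTICS ON DBC.TVFields COLUMN (TableId, FieldName);
COLLECT STATISTICS ON DBC.Indexes  COLUMN (TableId);
COLLECT STATISTICS ON DBC.Dbase    COLUMN (DatabaseNameI);
```

Re-running these periodically (for example, after large numbers of objects are created or dropped) keeps the statistics current.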

24 May 2010

Following up on the outside influences that will drive EDW growth, here are a couple of articles on the exponential (hmm… remember this adjective from the tech boom of the late '90s?) growth of internet connections.

17 May 2010

Time is one of the most powerful dimensions a data warehouse can support. Unfortunately it’s also one of the most problematic. Unlike OLTP environments that focus only on the most current versions of reference data, Data Warehouse environments are often required to present data not only as it currently exists, but also as it previously existed. Implemented correctly, a data warehouse can support several temporal orientations, the three most common being “current,” “point-in-time,” and “periodic.” Implemented incorrectly, you will create a solution that will be impossible to maintain or support.
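A minimal valid-time design illustrates the three orientations. All table and column names here are hypothetical, and the effective-date pair is just one common pattern for modeling history:

```sql
-- Each row carries the period during which it was the truth
CREATE TABLE Customer_Hist (
    Customer_Id   INTEGER NOT NULL,
    Customer_Nm   VARCHAR(60),
    Eff_Start_Dt  DATE NOT NULL,
    Eff_End_Dt    DATE NOT NULL
) PRIMARY INDEX (Customer_Id);

-- "Current": rows whose period covers today
SELECT * FROM Customer_Hist
WHERE  CURRENT_DATE BETWEEN Eff_Start_Dt AND Eff_End_Dt;

-- "Point-in-time": rows as they existed on a given date
SELECT * FROM Customer_Hist
WHERE  DATE '2010-01-01' BETWEEN Eff_Start_Dt AND Eff_End_Dt;

-- "Periodic": all versions that were in effect during a period
SELECT * FROM Customer_Hist
WHERE  Eff_Start_Dt <= DATE '2010-03-31'
AND    Eff_End_Dt   >= DATE '2010-01-01';
```

The maintenance burden mentioned above comes largely from correctly closing out the old row and opening the new one whenever reference data changes.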

14 May 2010

I still remember when PPI was first introduced in V2R5, and all the questions that came to mind about its use and performance impact. A long time has passed, and most Teradata systems nowadays have at least one large PPI table defined. Most of the performance questions have also been answered, especially with the use of "partition elimination" and "dynamic partition elimination". Now TD12 has introduced multilevel PPI, and again the same kinds of questions about usage and performance impact came to mind, as listed below.
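For readers who haven't seen one yet, a multilevel PPI is declared by combining two or more partitioning expressions. This is a sketch with made-up table and column names, following the TD12 syntax of listing the levels inside PARTITION BY:

```sql
-- Two-level PPI: month of sale, then store range
CREATE TABLE Sales (
    Store_Id  INTEGER NOT NULL,
    Sale_Dt   DATE NOT NULL,
    Amount    DECIMAL(10,2)
) PRIMARY INDEX (Store_Id, Sale_Dt)
PARTITION BY (
    RANGE_N(Sale_Dt  BETWEEN DATE '2010-01-01' AND DATE '2010-12-31'
            EACH INTERVAL '1' MONTH),
    RANGE_N(Store_Id BETWEEN 1 AND 100 EACH 10)
);
```

Queries that constrain either Sale_Dt or Store_Id (or both) can then benefit from partition elimination at the corresponding level.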

29 Apr 2010

Collecting full statistics involves scanning the base table and performing a sort to compute the number of occurrences for each distinct value. For some users, the time and resources required to adequately collect statistics and keep them up-to-date is a challenge, particularly on large tables. Collecting statistics on a sample of the data reduces the resources required and the time to perform statistics collection. This blog gives you some tips on using sampled stats.
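The syntax difference is a single clause. The table and column names below are illustrative only; the trade-off is that sampled statistics are cheaper to collect but can be less accurate, especially for skewed columns:

```sql
-- Full statistics: scans and sorts the entire table
COLLECT STATISTICS ON Sales_Hist COLUMN (Customer_Id);

-- Sampled statistics: reads only a sample of the rows
COLLECT STATISTICS USING SAMPLE ON Sales_Hist COLUMN (Customer_Id);
```

Once collected with USING SAMPLE, subsequent recollections of that statistic continue to use sampling.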
