
Normally when I build a query with LEFT JOINs, I put a previously mentioned table on the left side of the equals sign and the table in the LEFT JOIN on the right side of it. While analyzing queries that a third-party app is executing, I see something like the following:
SELECT *
FROM S1.TABLE_A A
LEFT JOIN S1.TABLE_B B ON B.ID = A.ID
 

I am experiencing efficiency issues when using multiple LEFT OUTER JOINs in queries. I can resolve this by breaking the existing query up into smaller queries, but I would like to know the technical reasons why LEFT OUTER JOINs can experience efficiency issues.
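The break-it-up approach mentioned above is often done in Teradata by staging intermediate results in volatile tables; a minimal sketch, with all table and column names hypothetical:

```sql
-- Stage the first join, then join the staged result to the next table.
CREATE VOLATILE TABLE stage1 AS (
    SELECT f.order_id, f.cust_id, d1.cust_name      -- hypothetical columns
    FROM fact_orders f                              -- hypothetical tables
    LEFT OUTER JOIN dim_customer d1
      ON f.cust_id = d1.cust_id
) WITH DATA PRIMARY INDEX (order_id) ON COMMIT PRESERVE ROWS;

SELECT s.*, d2.region_name
FROM stage1 s
LEFT OUTER JOIN dim_region d2
  ON s.cust_id = d2.cust_id;
```

Each smaller step gives the optimizer concrete demographics (known row counts, a known PI) to plan against, which is one reason this workaround sometimes helps.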

My data looks something like this:
table1:
ID | Line | Code
1 | 1 | a
1 | 2 | b
2 | 1 | d
2 | 2 | e
2 | 3 | f
 
table2:
Code | Name
a | alpha
b | beta
d | delta
e | epsilon
f | phi
 
A common join brings the following results:
ID | Line | Name
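A join producing the ID | Line | Name shape above would look something like this (table names taken from the question; the match is presumably on Code):

```sql
SELECT t1.ID, t1.Line, t2.Name
FROM table1 t1
INNER JOIN table2 t2
  ON t1.Code = t2.Code;
```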

Hello,
 
I have a difficult task (or at least this is how I see it) to implement using SQL. Here is the case:
- there are two tables, A and E, each having a numeric column. Each of the two tables contains checkpoints (numbers) for a specific region.

Hi,
I am trying to implement a view on two tables,
but the problem is I am not getting the result set after creating the view.
I have source tables:
First source table: T1
Second source table: T2
 
My requirement is to join T1 with T2 on date, t2nbr, t3nbr, t4nbr, t5nbr.
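A view along those lines might be sketched as follows (the view name and select list are assumptions; only the join keys come from the question):

```sql
REPLACE VIEW myview AS                 -- view name is hypothetical
SELECT t1.*, t2.extra_col              -- hypothetical select list
FROM T1 t1
INNER JOIN T2 t2
  ON  t1."date" = t2."date"           -- DATE is a keyword, so a column
  AND t1.t2nbr  = t2.t2nbr            -- literally named "date" needs quoting
  AND t1.t3nbr  = t2.t3nbr
  AND t1.t4nbr  = t2.t4nbr
  AND t1.t5nbr  = t2.t5nbr;
```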

I am trying to perform a sum based on multiple values in a table.
 
The current year-week is formatted like this: 201401 for year 2014, week 1.

Hi! I have the following query:

create multiset volatile table newTable, no log as (
SELECT so.*, pu.*
FROM db.table1 so
left outer join db.table2 pu on so.id = pu.id
) with data primary index(id) on commit preserve rows;

Which throws the following error when run:
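The error text is cut off above, but with `SELECT so.*, pu.*` one common failure mode is duplicate column names: both tables expose `id`, and a CREATE TABLE ... AS cannot carry two columns with the same name. One workaround is to list and alias the overlapping columns explicitly; a sketch, with the non-key column names invented for illustration:

```sql
CREATE MULTISET VOLATILE TABLE newTable, NO LOG AS (
    SELECT so.id,
           so.order_amt,               -- hypothetical column
           pu.id AS pu_id,             -- disambiguate the duplicate name
           pu.pickup_dt                -- hypothetical column
    FROM db.table1 so
    LEFT OUTER JOIN db.table2 pu
      ON so.id = pu.id
) WITH DATA PRIMARY INDEX (id) ON COMMIT PRESERVE ROWS;
```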

Consider two tables: one has COLUMN_A defined as a VARCHAR(100) and the other has COLUMN_A defined as an INTEGER. When joining these two tables via COLUMN_A, what does the optimizer have to do in order to accomplish the join? Convert VARCHAR to INTEGER, or convert INTEGER to VARCHAR? Thanks.

Hello,
I have an issue while trying to get only the record containing max(c.event_dtm) from the following query:
SEL     a.startclock_day,
           a.shacct_no,
           a.awb_no,
           b.ship_ref,
           b.lc_event_cd,
           c.event_dtm,
           c.gmt_event_dtm,
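The select list above is cut off before the FROM clause, but the usual Teradata pattern for keeping only the row with max(c.event_dtm) per group is QUALIFY with a windowed ROW_NUMBER(); a sketch, where partitioning by a.awb_no is an assumption:

```sql
SEL a.startclock_day, a.shacct_no, a.awb_no,
    b.ship_ref, b.lc_event_cd,
    c.event_dtm, c.gmt_event_dtm
FROM ...                                 -- joins as in the original (not shown)
QUALIFY ROW_NUMBER()
        OVER (PARTITION BY a.awb_no
              ORDER BY c.event_dtm DESC) = 1;
```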

SELECT col1, col2
FROM staging_table STG
INNER JOIN target_table TGT
  ON STG.SRC_TRAN_ID = TGT.SRC_TRAN_ID
WHERE TGT.ROW_STAT_CD = 'Active'
  AND TGT.SRC_SYS = 'sourcename'

In the above query, both STG and TGT have SRC_TRAN_ID as their PI.
TGT has stats collected on ROW_STAT_CD and SRC_SYS.

Hi all. We use PDCR to archive our DBQL tables - the primary index on the tables is (logdate,procid,queryid) and they are partitioned by logdate.
 

UPDATE db.source
SET region = CASE WHEN b.title_rw LIKE '%East%'    THEN 'East'
                  WHEN b.title_rw LIKE '%Central%' THEN 'Central'
                  WHEN b.title_rw LIKE '%West%'    THEN 'West'
                  ELSE 'Region N/A'
             END
FROM db.source a

Hi,
 

Overview of Unity Source Link

Unity Source Link (USL) allows reads and joins from an external or foreign database to Teradata Database without requiring data replication in the Teradata data warehouse. USL can be used to:

Table A has the value 'abc123'. Table B has the value 'abc123'.
I confirm this by:

SELECT * FROM tableA a WHERE value = 'abc123'
and
SELECT * FROM tableB b WHERE value = 'abc123'

and both queries return results.
However, when I do:

Hi All,
    I have read that "For join performance, a NUPI can be a better choice than a UPI."
Can someone please explain why that is? If it is a PI join, it is a same-AMP operation either way, so how can a NUPI serve better than a UPI?

Hi,

Need help on the following scenario.

Consider the following dataset

transaction | created_date | closed_date
1           | 1-jan-2011   | 5-jan-2011
2           | 2-jan-2011   | 10-jan-2011
3           | 5-jan-2011   | 5-feb-2011
4           | 10-jan-2011  | 6-mar-2011

Hello everyone,

  I have an SQL question about how a filter condition like (say) prod_cat = 7980 will be treated in the following two cases:

1. In a WHERE clause with multiple joins

    something like a INNER JOIN b

                                ON [join cond]

I am relatively new to using geo functions, but I want to make sure my places dimension table does not have any duplicates, by doing a delta operation against the freshly geocoded records in my staging table. First, I attempted a left outer join on the point_geo column:
    
 

Is there any other way of reliably telling the optimizer to use the PPI, other than hard-coding the minimal/maximal value (date), in a case like the following?

Here is a DDL for a big table with customer transactions:

transaction_table
PRIMARY INDEX NUPI_txn ( transaction_id )
PARTITION BY RANGE_N(transaction_date BETWEEN DATE '2008-01-01' AND DATE '2012-12-31' EACH INTERVAL '1' DAY , NO RANGE OR UNKNOWN)
INDEX NUSI_customer_id( customer_id)
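The hard-coded workaround the question refers to presumably looks something like this (the predicate and the date bounds are illustrative): a literal BETWEEN on the partitioning column lets the optimizer eliminate partitions at plan time.

```sql
SELECT *
FROM transaction_table
WHERE customer_id = 12345                          -- illustrative predicate
  AND transaction_date BETWEEN DATE '2012-01-01'
                           AND DATE '2012-12-31';  -- hard-coded bounds enable
                                                   -- static partition elimination
```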

I have TABLE A and TABLE B.

I'm trying to select reg_no ... INNER JOIN ON A.Id = B.Id1, Id2, Id3 ... Id20, and I have 3 conditions in the WHERE clause.
(A.Id could be equal to any of the 20 columns; sometimes to 1, 4, or 5 of the 20 columns at once.)

I'm writing it as:
ON (A.Id = B.Id1
OR A.Id = B.Id2
.............
OR A.Id = B.Id20)

I have 7 million rows, but after the WHERE clause no more than a few thousand remain. Can anyone please suggest whether I'm doing this right, or whether there is a simpler way to solve my problem?

sample query:

SELECT reg_no
FROM A
INNER JOIN B
ON (A.Id=B.Id1
OR A.Id=B.Id2
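OR-ed join terms like the ones above often prevent an efficient join plan; one commonly suggested rewrite is a UNION of single-column equijoins, sketched here (the SELECT list and WHERE conditions are abbreviated, and the real query would have 20 branches):

```sql
SELECT reg_no FROM A INNER JOIN B ON A.Id = B.Id1  WHERE ...  -- the 3 conditions
UNION
SELECT reg_no FROM A INNER JOIN B ON A.Id = B.Id2  WHERE ...
UNION
SELECT reg_no FROM A INNER JOIN B ON A.Id = B.Id20 WHERE ...
```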

Hi,

Does anyone know of a better way to do a table join on a string, ignoring case, other than doing something like:

join on lower(a.str_col) = lower(b.str_col)

Thanks,
Carl
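For what it's worth, in Teradata one alternative to lowering both sides is the NOT CASESPECIFIC attribute on the comparison, which makes it case-blind; a sketch using the column name from the question (aliases a and b stand for whatever tables are being joined):

```sql
JOIN ... ON a.str_col (NOT CASESPECIFIC) = b.str_col (NOT CASESPECIFIC)
```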

I'm relatively new to database development in general and I've got a question related to the use of CASE statements when joining tables. Here's the situation I'm faced with.

I have a table (T1) that looks like this:
key1 | key2 | key3 | key4
aaa  | zz   | 1    | 2
aaa  | zz   | 2    | 3
bbb  | yy   | 2    | 3
ccc  | xx   | 3    | 4

And another table (T2) that looks like this:
key1 | key2 | key3 | key4
aaa  | zz   | 1    | 2
bbb  | yy   | ^    | ^
ccc  | xx   | 3    | ^

As you can see the keys in T2 are not always the same as T1 (don't ask why). But I need to use as many of the keys as I can in the join. What I coded is this:
ON
T1.key1 = T2.key1
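Continuing that ON clause with CASE, in the spirit the question describes, might look like this sketch (treating '^' as "no value in T2, fall back to T1's own key" is an assumption):

```sql
ON  T1.key1 = T2.key1
AND T1.key2 = T2.key2
AND T1.key3 = CASE WHEN T2.key3 = '^' THEN T1.key3 ELSE T2.key3 END
AND T1.key4 = CASE WHEN T2.key4 = '^' THEN T1.key4 ELSE T2.key4 END
```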

Hi ALL,

I have a simple question: does the order of the joined tables matter? Which is better?

select ...

from large_Table lt

inner join medium_table mt
on...

inner join small_table st
on...

or?:

select ...

from small_Table st

inner join medium_table mt
on...