08 Jun 2011
Supporting Multiple Environments on One Server

I am sure this one has come up before, but my searching does not find any comments...
Rather than buying multiple servers, it has been recommended that we implement multiple environments on one server (for example, Dev, Test and QA on one server) separating them by creating multiple databases with suffixes to identify the environment. While this could work, the risk and effort seems high - all view DDL for example needs to change from one environment to another - maybe using parameters. ETL has to be handled the same way. Assigning default databases is not enough as a view or query could easily pull from more than one.
I can see technical solutions (work arounds) but none seem great.
I hear different things from colleagues - most say never do this while a couple say it is done all the time.
Any comments? Any solid solutions that people have come up with that satisfy strict migration governance rules while not adding too much complexity to the ETL and DDL?

Adeel Chaudhry 773 posts Joined 04/08
13 Jun 2011


Well, talking about the ideal scenario .... yes, every environment should be separate .... starting from the IP addresses, network cables, servers, etc. etc. etc.!! But what if the budget doesn't allow that? :)

So yes .... it can be a tradeoff as well .... and most of the time, though not always, it ends up being one (as far as my experience goes)!

Possible issues can be:

- Data visibility between/across environments
- Releasing code to separate environments
- Tracking data quality and issues etc.

For a solution, assume we start with a fresh, clean system:

- Create one parent user, e.g. "Project1"
- Under it, create DBs/users (my preference would be users), e.g. "Project1_DEV", "Project1_UT", "Project1_SIT", "Project1_PRD"
- Put whatever structure you need under those DBs, keeping it consistent across all of them
- Create separate, independent roles .... one for each environment
- Use the roles to grant access to the respective developers, testers, and production users
- For moving code, e.g. from development to the unit-testing area, you only have to replace "_DEV" with "_UT" and nothing else. This ties into the release management process, and it is a good idea to have a trained resource handle it
- Implement code versioning (it works hand-in-hand with release management) for better control
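The suffix-swap promotion in the steps above can be sketched as a plain sed pass over the DDL. The file names and the sample view below are made up for illustration, and the bteq call at the end is only indicative:

```shell
#!/bin/sh
# Sketch of the "_DEV -> _UT" promotion step. File names and the sample
# view are hypothetical; only the suffix-swap technique is the point.

SRC="_DEV"
TGT="_UT"

# Sample view DDL as it exists in the development environment
cat > create_view.sql <<'EOF'
REPLACE VIEW Project1_DEV.v_sales AS
SELECT * FROM Project1_DEV.sales;
EOF

# Promote: swap the environment suffix; nothing else in the script changes
sed "s/${SRC}\./${TGT}./g" create_view.sql > create_view_ut.sql

cat create_view_ut.sql
# The promoted script would then be submitted with your tool of choice, e.g.:
# bteq < create_view_ut.sql
```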


Regards, MAC

-- If you are stuck at something .... consider it an opportunity to think anew.

14 Jun 2011

Adding a suffix like _DEV, _UT, ... is one of the workarounds for sure. It means the DDL has to change between environments for views - unless you can manage it all through defaults, which is not always possible. This could be scripted, even parameterized, but generally, DBAs do not like it. More critical: ETL needs to be parameterized, and so does the BI layer. Basically, the DDL is really the easiest part. The more consuming applications there are, the more effort and the more risk.

mnylin 118 posts Joined 12/09
14 Jun 2011

You're right that DDL is probably the easiest part. ETL isn't too far behind, though. If you keep the suffix (I prefer prefixes, so databases sort together by environment) in the database name, it's very easy to parameterize most enterprise-class ETL tools. For the Teradata utilities, you can create a wrapper script in whichever shell you use (Windows batch, ksh, bash, etc.). It can read a parameter file listing the database names that correspond to each layer of the current environment.
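A minimal sketch of such a wrapper follows. The params.&lt;env&gt; file format and the STG_DB/TGT_DB variable names are assumptions for illustration, not any tool's convention:

```shell
#!/bin/sh
# Sketch of a wrapper around a Teradata utility job: pick up the
# environment-specific database names from a parameter file.
# params.<env> format and variable names are hypothetical.

ENV="${1:-dev}"                  # environment chosen on the command line

# One parameter file per environment; created here so the sketch is runnable
cat > "params.${ENV}" <<'EOF'
STG_DB=dev_staging
TGT_DB=dev_warehouse
EOF

. "./params.${ENV}"              # source the names into the current shell

echo "staging database: ${STG_DB}"
echo "target database:  ${TGT_DB}"
# A real wrapper would now invoke the utility, e.g.:
# bteq <<SQL
#   INSERT INTO ${TGT_DB}.sales SELECT * FROM ${STG_DB}.sales_stg;
# SQL
```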

A similar approach can be used for DDL implementations, although you're right that most DBAs don't like to see their scripts change before implementation. This gets you around that: the script can remain static (even read-only) and the parameters are replaced in memory before the script gets executed. If you're up on your shell programming, that is.
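The in-memory substitution might look like this; the {{DB}} placeholder convention and the file and database names are assumptions, not anything the utilities require:

```shell
#!/bin/sh
# Sketch of keeping the DDL script static (even read-only) and swapping the
# database name in memory only. Placeholder convention and names are
# hypothetical.

cat > deploy_view.sql <<'EOF'
REPLACE VIEW {{DB}}.v_sales AS
SELECT * FROM {{DB}}.sales;
EOF
chmod 444 deploy_view.sql        # the script on disk never changes

TARGET_DB="uat_semantic"

# sed streams the substituted text; nothing is written back to disk
sed "s/{{DB}}/${TARGET_DB}/g" deploy_view.sql
# In practice you'd pipe it into the utility instead of stdout, e.g.:
# sed "s/{{DB}}/${TARGET_DB}/g" deploy_view.sql | bteq
```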

For most third-party ETL tools like DataStage, Ab Initio, etc., you have the option of specifying a variable in your jobs that is replaced at run time. Again, this can be the database name corresponding to the layer of the environment you're manipulating.

If you properly set up profiles, roles, and accounts, you can easily manage user access across the environments.

In my experience with this kind of thing, the real issue comes in at the BI layer. I'm not aware of a widely used tool that lets you parameterize your BI queries to point at different databases. One option is to make use of the default database, either through a shared user ID that connects to Teradata or through some kind of pre-SQL statement executed before each query. BusinessObjects has a BEGIN_SQL parameter, for instance, that is executed before each statement from the tool. Or, if you prefer, establish DEV, UT, PRD, etc. user IDs for BI users, each with the appropriate semantic layer assigned as its default database.
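As a sketch of the default-database route, assuming Teradata syntax; the user and database names are illustrative and your site's CREATE USER defaults will differ:

```sql
-- Hypothetical per-environment BI user whose default database points at the
-- matching semantic layer (all names illustrative):
CREATE USER bi_uat FROM Project1 AS
    PASSWORD = tempPW123,
    PERM = 0,
    DEFAULT DATABASE = Project1_UT;

-- Alternatively, a pre-SQL statement (e.g. via BEGIN_SQL) can switch the
-- session's default database before each query:
DATABASE Project1_UT;
```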

I wish there was a more glamorous or easy solution, but I haven't come across it before. If you find one, let me know.
