Wednesday, March 7, 2012

Database Design - Referencing Multiple Databases

Hi All,
I am designing a database where a few of the master tables will reside in a different database, or in some cases on a different server. The scenario is:
Server "A" with Database "A" may host the "Accounts" table.
Server "B" with Database "B" may host the "Product" table.
I am designing database "Project", which will be hosted on Server "A".
My application requires read-only access to these master tables, as data inserted by my application references them. There are also reports to be generated which reference these tables.
How do I design my database and SQL queries?
I am thinking of an approach where equivalent tables are created in my database, with a service that keeps them in sync. This should ensure good performance for transactions and reports, as they will reference these tables locally instead of in a different database or on a different server.

Any thoughts on the above approach? Or is there a better/standard way to handle such scenarios?

Thanks in advance. Your inputs will be of great help.|||Why multiple databases? It makes it harder to enforce referential integrity, and it can cause other headaches. I just recently ran into an issue where I swapped out a database instead of doing a restore, and the application broke because database objects that reference another database actually store the database ID instead of the name. It was easy to fix, but I still had an hour of downtime. Thank God it was not a production server.|||Hi All,
Hi.
Hi All,
I am designing a database where a few of the master tables will reside in a different database, or in some cases on a different server.
Really...
Scenario is
Server "A" with Database "A" may host the "Accounts" table.
Server "B" with Database "B" may host the "Product" table.
I am designing database "Project", which will be hosted on Server "A".
How do I design my database and SQL queries?
In a word: "poorly".
I am thinking of an approach where equivalent tables are created in my database, with a service that keeps them in sync. This should ensure good performance for transactions and reports, as they will reference these tables locally instead of in a different database or on a different server.
Well, that plan certainly is......"creative"...
Any thoughts on the above approach? Or is there a better/standard way to handle such scenarios?
I'd say, practically anything else would be better.
Thanks in Advance. Your inputs will be of great help.
No problem.|||Ahh, the much-vaunted (but almost certainly mythical and definitely undocumented) 'unreplication' functionality of SQL Server! ;)

On a more serious note, if you really want to proceed down this route you will need to set up Linked Servers and enable the MSDTC service.
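For reference, setting up such a linked server and querying through it looks roughly like this. This is only a sketch: the server, database, and table names are the "A"/"B" placeholders from the original post, not real objects.

```sql
-- Sketch only: register Server B as a linked server on Server A.
-- All names below are placeholders from the post.
EXEC sp_addlinkedserver
    @server     = N'ServerB',
    @srvproduct = N'',
    @provider   = N'SQLOLEDB',
    @datasrc    = N'ServerB';

-- Remote tables can then be referenced with four-part names:
SELECT p.ProductID, p.ProductName
FROM ServerB.DatabaseB.dbo.Product AS p;
```

It's when a single transaction spans the local server and a linked server that MSDTC gets dragged into the picture.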

I would implore you to think again.

Lempster|||That's certainly a novel approach - doomed, but novel. Out of curiosity, what advantages were you expecting from spreading your system across different servers? And what were your databases supposed to hold?|||How versatile

but, um, how much experience do you have?

Oh, I get it, you're in upper manglement|||Ouch Brett...that'll likely leave a scar. ;)

I assumed from the description that, between the lines, there is a plain motivation. It seems to me that the distant (aka remote) databases are already in existence, and the O.P. has been given the task of pulling some data in from other servers, and followed the logical path to the current planned implementation!

Sheesh. How silly is it to have databases spread all 'round when you can just copy them all to one place and just keep them in synch?|||Hi All,
I am designing database

I didn't read it all

I am designing a housebuilding business

I plan to have all of my nails located in DC, all of my wood in Oregon, all the fixtures in NY, and all of my roofing supplies in N'orleans (they're all over the place and free)|||Sometimes I go to other SQL boards after following a link, and I am always surprised at the difference in the content from this place.|||How so?

Too short|||I think this board has more of a tendency to slam the OP.|||Nothing like the real world, eh?|||No soup for you, poster!|||There are scenarios where distributed databases are a good idea - but they don't come up very often (at least I've never met one). I must admit it would be quite interesting to work on one, though. I think it's important to hear the reasoning behind his system and find out why he thinks it's so versatile.|||I agree there, Mike. Sometimes you need to know the rationale behind the current design before dashing said design upon the jagged and bloody rocks of despair. ;)

If nothing else, it also allows one to more pointedly help (and/or taunt) the original poster's assumptions and reasoning.

As I am sure you are aware, sometimes stuff just comes across initially so far off the wall it's hard to resist a good-natured jab in the ribs and perhaps a well-intentioned kick in the groin.|||Hi all,
Thanks for all kinds of replies.
Well, first let me clarify (which I should have done in my first post) that the databases are hosted on different machines because they are mastered by different groups of people or departments. The application we will be designing depends on this data. We certainly cannot have all the required data in one single database.
The option of creating identical tables and syncing them regularly is, I feel, better than writing distributed queries.

Any thoughts on it?
Thanks,|||What's wrong with just downloading the data from the other data sources via feeds - bcp each file into a transfer table - check the data - put the changed data into your database. How many feeds could you expect this way, what size (roughly) for each feed?

Your method may save you having to write any feeds but the people writing the app are going to hate you for the rest of their lives :)
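Sketched very roughly, with every object and file name below invented purely for illustration, that kind of feed might be:

```sql
-- 1) Outside SQL Server, pull the remote table down and load it, e.g.:
--      bcp DatabaseB.dbo.Product out product.dat -c -S ServerB -T
--      bcp ProjectDb.dbo.Product_Staging in product.dat -c -S ServerA -T

-- 2) After checking the staged rows, apply only the changes locally:
UPDATE t
SET    t.ProductName = s.ProductName
FROM   dbo.Product AS t
JOIN   dbo.Product_Staging AS s ON s.ProductID = t.ProductID
WHERE  t.ProductName <> s.ProductName;

INSERT INTO dbo.Product (ProductID, ProductName)
SELECT s.ProductID, s.ProductName
FROM   dbo.Product_Staging AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Product AS t
                   WHERE t.ProductID = s.ProductID);
```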

Mike|||Well, first let me clarify (which I should have done in my first post) that the databases are hosted on different machines because they are mastered by different groups of people or departments. This sounds like a small data warehousing initiative to me. Perhaps that is the approach you should be taking.|||We do stuff like this on a daily basis, consolidating data from different departments on a single machine for historical and modeling purposes. I use a combination of distributed queries and web services that allow us to yank data in flat-file format from some machines, and yank data directly from others (via linked servers).

Seems like downloading and synching databases is a lot more work than you need to be doing to do what (I understand to be...) you need to do. Data feeds and distributed queries are much better with respect to basic database design conventions, and synching databases just causes a spontaneous sphincter constriction whenever I hear it.

Duplication of data = bad. Direct/indirect (and on-demand) access to remote data = good.

Sounds like you've already decided though, so g'luck to you :)|||Well, things are getting more complex. Our application will depend on four external databases.
The reason for staying away from distributed queries was that our application requires this data (the external database tables serve as master tables for our application) in transactions, and of course in reports as well.

I liked the suggestion of getting a feed from the external databases into a temporary staging area and then updating only the changesets in our tables.|||Originally posted by versatile_me
I liked the suggestion of getting a feed from the external databases into a temporary staging area and then updating only the changesets in our tables. I'd like to claim first dibs on the concept, but I suspect it's been around for 40 years.

I know it's fun coming up with new systems, but after you spend 6 months sweating blood trying to get it to work there are only a few possible outcomes:
it fails and you have to call in a professional, who is then faced with working with a flawed design or losing the client by throwing everything away and starting afresh.
it works, but only just, and after 3 months limping along the system gets abandoned.
it works and is a fantastic success - I'm afraid this is the least likely outcome :)
If the application is at all important to your company then it may well be worth getting an experienced professional in to do the work while you provide the business requirements. This would be a good way to learn about database/system design and also might ensure any good ideas you may have will appear in the finished product.

Mike|||We do stuff like this on a daily basis. Consolidating data from different departments on a single machine for historical and modeling purposes. I use a combination of distributed queries and web services that allow us to yank data in flat file format from some machines, and yank data directly from others (via linked servers).

Seems like downloading and synching databases is a lot more work than you need to be doing to do what (I understand to be...) you need to do. Data feeds and distributed queries are much better with respect to basic database design conventions, and synching databases just causes a spontaneous sphincter constriction whenever I hear it.

Duplication of data = bad. Direct/indirect (and on-demand) access to remote data = good.

Sounds like you've already decided though, so g'luck to you :)

...
I am thinking of an approach where equivalent tables are created in my database, with a service that keeps them in sync. This should ensure good performance for transactions and reports, as they will reference these tables locally instead of in a different database or on a different server.

I'm struggling to see much difference between TallCowboy's "daily consolidation" and the OP's idea of equiv. Master tables that are regularly refreshed.

Although duplicate data = bad, isn't it common practice to have read-only replicated tables on a different server for reporting (or other read-only) purposes? Obviously, the data is as old as the last replication, so the designer has to consider the ramifications of slightly old data, but this localizes remote-connection / design breakages into the DTS / copy programs, not the report.
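As a sketch of that kind of scheduled refresh (assuming a linked server named "ServerB"; the table and column names are made up for illustration), a job step could be as simple as:

```sql
-- Periodic refresh of a local read-only copy of a remote master table.
-- "ServerB" is an assumed linked server; the other names are invented.
BEGIN TRAN;
TRUNCATE TABLE dbo.Product_Local;
INSERT INTO dbo.Product_Local (ProductID, ProductName)
SELECT ProductID, ProductName
FROM ServerB.DatabaseB.dbo.Product;
COMMIT TRAN;
```

Reports then read dbo.Product_Local and simply accept that the data is as old as the last refresh.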

Not so much "versatile" as "stable". (at least - until the source system changes and you have to fix the DTS - or retest it each and every time it changes ... so I'm not saying it's a great idea, just maybe unavoidable).|||I'm struggling to see much difference between TallCowboy's "daily consolidation" and the OP's idea of equiv. Master tables that are regularly refreshed. The difference is that in my system the data is not imported on a daily basis "as-is", but rather the existing tables are updated with NEW data that we did not previously have. However, I see your point in that the difference may be perception (and/or semantics) and my understanding of what the O.P. is contemplating as an implementation.

In my world, the data we import (via web service or via remote access) is not a "synching" of databases. It's the importation of data from external providers into our own "consolidated" and normalized database tables (and used for our own devious purposes completely unrelated to the original source(s) of the data). At least in my eyes, a completely different process than importing actual tables from a remote database and keeping them in synch.

Although duplicate data = bad, isn't it common practice to have read-only replicated tables on a different server for reporting (or other read-only) purposes? Obviously, the data is as old as the last replication, so the designer has to consider the ramifications of slightly old data, but this localizes remote-connection / design breakages into the DTS / copy programs, not the report. I can't speak for this one, because we don't do it. We either provide access to our data via controlled (read-only) linked servers and associated userid's, or via a web service that is a front-end to our database.

We absolutely don't allow uncontrolled access to our tables, even remote access via linked servers/userid's is typically controlled with a view. If there is time and/or the longevity of the projects dictates, we write a web service wrapper to the external world and make them keep their nasty little grubbies off our databases.
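That kind of controlled access might be sketched like this (the view, table, and user names are invented purely for illustration):

```sql
-- Expose only the columns the remote consumer needs, via a view...
CREATE VIEW dbo.Product_ReadOnly
AS
SELECT ProductID, ProductName
FROM dbo.Product;
GO

-- ...and grant the remote userid SELECT on the view alone,
-- so the base tables are never touched directly.
GRANT SELECT ON dbo.Product_ReadOnly TO RemoteReader;
```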

We expect (and generally require) external/remote databases to have (or support) the same indirect access. We don't want our applications failing because they changed a table on the external system.

Not so much "versatile" as "stable". (at least - until the source system changes and you have to fix the DTS - or retest it each and every time it changes ... so I'm not saying it's a great idea, just maybe unavoidable). Ed-zachary :)|||I think this board has more of a tendency to slam the OP.

Really?

http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=67782&whichpage=8|||ROTFLMAO ... like Mastercard ... that was priceless :D
