
Query Service or Synchronized Data?


- When linking two systems, with one providing data to the other, the providing system's physical environment must be sized for both the capacity and the reliability requirements of the requesting system. Put another way, the providing system must meet or exceed the quality of service (SLA) of the requesting system.

- In the example I was reviewing, the HR system was the source of the desired information and was sized for the user capacity of the HR department and the business impact of an HR-department outage. If it is to be used as a real-time providing system for Department B, its capacity must be increased from the HR department's user base (10 users) to Department B's user base, the main business area (3,000 users). Its redundancy must also be increased so that it suffers no outages.
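
A minimal sketch of that quality-of-service rule: the check below compares a providing system against a requesting system on capacity and availability. The `SystemProfile` fields and the specific availability figures are illustrative assumptions, not taken from the actual systems described above.

```python
from dataclasses import dataclass

@dataclass
class SystemProfile:
    name: str
    peak_users: int      # user capacity the environment is sized for
    availability: float  # e.g. 0.999 = "three nines"

def provider_meets_qos(provider: SystemProfile, requester: SystemProfile) -> bool:
    """The provider must meet or exceed the requester on every dimension."""
    return (provider.peak_users >= requester.peak_users
            and provider.availability >= requester.availability)

hr = SystemProfile("HR system", peak_users=10, availability=0.99)             # assumed SLA
dept_b = SystemProfile("Department B", peak_users=3000, availability=0.9999)  # assumed SLA

if not provider_meets_qos(hr, dept_b):
    print("Mismatch: resize the provider, or synchronize a local copy instead.")
```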

- Alternatively, when there is a mismatch between the capacity and reliability of the providing and requesting systems, the data may be effectively de-normalized by building a copy or synchronization mechanism between the systems: the providing system sends a copy of the needed data set into the requesting system for local use. This may also be appropriate if the systems operate in different security domains, in different networks or network segments where bridging is a problem, or where the systems are separated by geographic distance and network performance [speed or capacity] is an issue. It's also frequently required where a packaged application is the requesting system, as most packaged applications will only query data in their expected local format and cannot be redirected to a web service or other remote query.
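
Here is a minimal sketch of such a copy mechanism, assuming the providing system exposes a read-only extract and the requesting system keeps a local staging table. The function names, the employee data set, and the SQLite store are all hypothetical stand-ins.

```python
import sqlite3

def fetch_employee_extract():
    """Stand-in for pulling the needed data set from the providing system."""
    return [
        (101, "Alice", "Engineering"),
        (102, "Bob", "Sales"),
    ]

def upsert_local(rows, db_path="requester_local.db"):
    """Copy the data set into the requesting system's local store."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS employees "
        "(id INTEGER PRIMARY KEY, name TEXT, dept TEXT)"
    )
    conn.executemany(
        "INSERT INTO employees (id, name, dept) VALUES (?, ?, ?) "
        "ON CONFLICT(id) DO UPDATE SET name=excluded.name, dept=excluded.dept",
        rows,
    )
    conn.commit()
    conn.close()

upsert_local(fetch_employee_extract())
```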

- One other concern in such integrations is the lifecycle of the information. Is it static reference information, infrequently updated, frequently updated, or transactional/computed? Frequently updated information carries the danger of synchronization problems or stale data, and transactional or computed items can only be queried from their source system.
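
That rule of thumb can be captured in a small decision helper; the lifecycle category names below are my own labels for the cases described above.

```python
def integration_style(lifecycle: str) -> str:
    """Map a data lifecycle category to a suggested integration approach."""
    if lifecycle in ("static", "reference", "infrequently-updated"):
        return "synchronize a local copy"          # low staleness risk
    if lifecycle == "frequently-updated":
        return "synchronize with caution"          # staleness risk is real
    if lifecycle in ("transactional", "computed"):
        return "query the source system directly"  # only the source is current
    raise ValueError(f"unknown lifecycle: {lifecycle}")

for kind in ("reference", "frequently-updated", "transactional"):
    print(kind, "->", integration_style(kind))
```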

We generally hear that de-normalizing, whether in the database itself or across systems, is a crime (across systems would make it an integration crime?). Yet the circumstances above can make it preferred or even necessary. This is not a bad thing when done for the right reasons and in a reasonably reliable way. (Note that every synchronization operation should have a full re-sync option that is run periodically [monthly, yearly, or as appropriate].)
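
As a sketch of that periodic full re-sync, the scheduler below runs a complete re-copy on the first of the month and incremental deltas otherwise; the schedule and both sync functions are illustrative placeholders.

```python
from datetime import date

def incremental_sync():
    print("syncing rows changed since the last run")  # placeholder

def full_resync():
    print("re-copying the entire data set")           # placeholder

def run_sync(today: date):
    # Full re-sync on the first day of each month (assumed schedule);
    # otherwise just pick up the deltas.
    if today.day == 1:
        full_resync()
    else:
        incremental_sync()

run_sync(date.today())
```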
