
MDM & SOA - Layer, Repurpose or Replace?



An Architect Friend sent me this extended architecture question...

I recently joined a company that provides business consulting services (via many MBAs) related to sales and marketing. Most clients are large pharma companies.

In addition to consultants, there are business process outsourcing teams (offshore) that handle operations (like incentive plan management, report distribution, etc.). There is also a BI/reporting group that creates BI/DW solutions (custom ones using a template approach) for large clients. Plus there is a Software Development (SD) group.

Over the years, the company's Software Development group created various (10+) browser-based (.NET/SQL) point solutions/tools to help consultants (and eventually some headquarters users) perform specific tasks. For example:

- Designing sales territories and managing the alignment of reps to territories
- Custom ETL-based tools to perform incentive calculation
- Some Salesforce-like platform for creating custom form-based apps
 
The applications are architected as single-tenant – with some deployment tricks to be able to deploy an “instance per client” on the web servers. The databases are isolated per client/instance.  The tools are sold as if they are part of an integrated suite, but they aren’t natively integrated and require custom integration.

There is a custom-grown ETL-like tool for interconnecting the tools with each other (but no standard connections, since the data models are all “flexible” and not well defined), plus Informatica and Boomi to get data from clients.  Some clients use one tool, some use two, some use three, etc.  Some tools are used directly by the client, but most are used by the consulting teams on behalf of the client.
 
Lately, there is a desire to make it all “integrated” across the company (SD + BI + all else). Two main themes are emerging (even prior to my joining): “common data model” and “SOA”.  There is also the question of letting existing applications function as-is and developing new ones on a more proper architecture, versus trying to evolve the existing apps.
 
However, the distinction between an Enterprise looking inward at its own systems and trying to align them, versus an Independent Software Vendor (ISV) looking to build software for other Enterprises, has not yet sunk in… and the concepts are being confused…
 
The tension between standardized, productized software and customized (consulting-company) software solutions is not yet resolved.


I wanted to ask if you had experience in environments where an ISV was trying to define the enterprise architecture of its solutions for customers versus its own internal architecture.

Are there any case-studies or resources you could point me to get some reference architecture examples?

I usually do not like “next gen” approaches, but I am not seeing much potential in evolving the existing assets into an integrated state (they carry a lot of “baggage” and features that don’t play nicely with an “integrated” world-view).
=============

Here's my answer:


I wanted to ask if you had experience in environments where an ISV was trying to define the enterprise architecture of its solutions for customers versus its own internal architecture.

-        No, though I have built integration competency centers and projects that provided shared service environments across very large-scale enterprises with disparate divisions.

Are there any case-studies or resources you could point me to get some reference architecture examples?

-        Not that I know of.  I'm not much of a fan of such studies, mostly because the requirements and details are always highly complex, and those details directly affect the approaches taken.  Studies and reference architectures provide a nice high-level structure – but the more you try to keep to them in the details, the less effective they are (as they are mismatched to the exact situation).  I use bits of TOGAF 9 from opengroup.org, bits of CBDI from Everware-CBDI (http://www.everware-cbdi.com/), and various tidbits picked up from ZapThink (though every few years they discuss the benefits of yet another framework).

I usually do not like “next gen” approaches, but I am not seeing much potential in evolving the existing assets into an integrated state (they carry a lot of “baggage” and features that don’t play nicely with an “integrated” world-view).

-        It's a pretty standard problem: how to balance between what is, how it can be extended / expanded / reused, and what should be replaced / redeveloped / moved up to a new generation of technology, patterns and features.

- The problem you describe sounds like it crosses between SOA / integration and MDM (master data management).  Sometimes a SOA façade can provide an MDM operational model, with composite services doing multi-system queries, combining or rationalizing the results, and presenting single meaningful "views".  In other cases it's the SOA abilities enabling MDM to do its job, which often involves significant bi-directional synchronization.
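To make the façade idea concrete, here is a minimal Python sketch of a composite service over two hypothetical source systems (the system names, fields, and precedence are illustrative assumptions, not anything from the actual environment): the façade queries both tools, rationalizes the conflicting fields, and returns one merged view.

```python
def query_territory_tool(customer_id):
    # Stand-in for a call to the territory-alignment tool's API.
    return {"customer_id": customer_id, "name": "Acme Pharma", "rep": "J. Doe"}

def query_incentive_tool(customer_id):
    # Stand-in for a call to the incentive-calculation tool's API.
    return {"customer_id": customer_id, "name": "ACME PHARMA INC", "plan": "Q3-Std"}

def composite_customer_view(customer_id):
    """Facade: combine per-system records into a single rationalized view."""
    territory = query_territory_tool(customer_id)
    incentive = query_incentive_tool(customer_id)
    view = {}
    view.update(incentive)   # lower-precedence source first...
    view.update(territory)   # ...territory tool "wins" on conflicting fields
    return view
```

The point is not the two-line merge but the shape: callers see one service and one "view", while the per-system queries and the rationalization rules stay hidden behind the façade.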

- The MDM tools tend to be heavy, and the business and systems analysis work (deciding which system wins when data is in conflict, for example) is a major portion of the success or failure.
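That "which system wins" analysis is often captured as per-attribute survivorship rules. A toy sketch, assuming hypothetical system names and attributes (none of this is a real client configuration):

```python
# Per-attribute precedence: first system in the list that has a value wins.
SURVIVORSHIP = {
    "address": ["crm", "erp", "legacy"],       # CRM most trusted for address
    "credit_limit": ["erp", "crm", "legacy"],  # ERP most trusted for finance
}
DEFAULT_PRECEDENCE = ["crm", "erp", "legacy"]

def merge_golden_record(records):
    """records: {system_name: {attribute: value}} -> merged 'golden' record."""
    golden = {}
    attributes = {attr for rec in records.values() for attr in rec}
    for attr in attributes:
        for system in SURVIVORSHIP.get(attr, DEFAULT_PRECEDENCE):
            value = records.get(system, {}).get(attr)
            if value is not None:
                golden[attr] = value
                break
    return golden
```

The rules themselves are trivial to code; deciding what they should be, attribute by attribute and source by source, is the analysis effort that makes or breaks the MDM project.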

- That said, IF you are only trying to get views of the data, I am hearing reports of good success with some of the easier Big Data tools (such as MongoDB).  "Success" meaning they are able to develop and deploy meaningful business results in months, whereas MDM and big-integration SOA projects almost always take over a year.
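Part of why document stores work well for read-only views is that records from different tools, each with its own shape, can sit in one collection and be queried on whatever keys they share. A plain-Python stand-in for that idea (the client and tool names are made up; a real deployment would use an actual store such as MongoDB):

```python
# One "collection" holding differently-shaped documents from different tools.
view_collection = [
    {"client": "A", "tool": "territory", "regions": 12},
    {"client": "A", "tool": "incentive", "plan": "Q3-Std", "payout": 1.05},
    {"client": "B", "tool": "territory", "regions": 7},
]

def find(collection, **criteria):
    """Mongo-style query-by-example: match documents on the given key/values."""
    return [doc for doc in collection
            if all(doc.get(k) == v for k, v in criteria.items())]
```

No up-front common data model is needed to load or query the collection, which is exactly what shortens these "views" projects relative to full MDM.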
 
