
Along the SOA Tipping Point


When Anne Thomas Manes (of the Burton Group) famously declared in January of 2009 that "SOA is dead", everyone rushed around trying to understand what she meant. Given that a year later she's still giving presentations on SOA governance and other SOA topics, clearly she didn't mean that SOA was a failed technology. (There are plenty of IT technologies that arrive with much hype but never quite translate into practical usage patterns or benefits for enterprise IT, and therefore fade away as quickly as they arrived.)

Today when I'm talking with IT organizations, the majority are doing some level of SOA. So clearly SOA has moved along the adoption curve. The innovators struggled with it but gained, and touted, an early advantage. The early adopters picked it up and integrated it into their enterprise IT model.

We're clearly past even the early majority and a good way into the late majority. The late majority are organizations that two years ago weren't considering SOA, organizations with little drive to change or with business requirements that make stability a strong priority, e.g. utility companies, government IT, or back-office military IT. Today all of these organizations are either beginning to use SOA technologies, running actual SOA projects, or finding themselves in a bottom-up situation where SOA technology has been implemented at a lower level in some projects and they need to begin to rationalize and manage the results.

As an example of this, I visited a customer whose mainframe department was very enthusiastically talking about how they'd exposed over 100 web services directly from CICS [as IBM's CICS 3.1 and Enterprise COBOL allow relatively quick and easy exposure of transactions as services]. A year earlier these same people were shaking their heads over newfangled XML, and mainframe connectivity was a carefully managed process using MQ or various other gateway tools. When the mainframe guys are excited about web services and exposing modular functionality, it's a clear sign SOA has passed into the late majority.
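To make that concrete, here's a rough sketch of what a consumer of one of those CICS-exposed services might look like. It's a minimal, hypothetical example - the endpoint URL, operation name, SOAPAction value, and request fields are invented for illustration; in a real project they would come from the WSDL generated when the CICS transaction is exposed as a web service.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

// Minimal sketch of calling a CICS transaction exposed as a SOAP web service.
// All names below (host, path, operation, fields) are placeholders, not a real API.
public class CicsSoapClientSketch {

    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint for a CICS-hosted service (assumption).
        URL endpoint = new URL("http://mainframe.example.com:8080/cics/services/InquireAccount");

        // Hand-built SOAP 1.1 envelope; element names are illustrative only.
        String soapEnvelope =
            "<soapenv:Envelope xmlns:soapenv=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
            "  <soapenv:Body>" +
            "    <InquireAccountRequest xmlns=\"http://example.com/cics/account\">" +
            "      <AccountNumber>1234567890</AccountNumber>" +
            "    </InquireAccountRequest>" +
            "  </soapenv:Body>" +
            "</soapenv:Envelope>";

        HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        // The SOAPAction header value is defined by the WSDL binding; placeholder here.
        conn.setRequestProperty("SOAPAction", "InquireAccount");

        // Send the request envelope.
        try (OutputStream out = conn.getOutputStream()) {
            out.write(soapEnvelope.getBytes(StandardCharsets.UTF_8));
        }

        // Read and print whatever the service returns.
        try (Scanner in = new Scanner(conn.getInputStream(), StandardCharsets.UTF_8.name())) {
            in.useDelimiter("\\A");
            System.out.println(in.hasNext() ? in.next() : "");
        }
    }
}

Note that nothing in the client knows or cares that a CICS transaction sits behind the endpoint - it's just HTTP and XML - which is exactly why the mainframe team could expose so many services so quickly.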

However, while most IT shops are now doing some level of SOA, few of them have embraced or implemented the methodology, IT management, and IT-business interaction changes that are necessary to gain most of the benefits of SOA. SOA has succeeded as a technology set but the accompanying people changes have not penetrated. And most IT organizations are losing their SOA benefits because of this.

SOA technology without methodology is a net loser. (Not the methodology of low-level integration patterns, but rather the methodologies that bring changes to high-level architecture, process modeling, IT-business interaction, and IT management.) Nearly everyone is doing it because systems must be connected live and business processes in practice spread across systems - and SOA technology offers a relatively easy way (from a pure connectivity and composition standpoint) of doing so. But nearly everyone is also complaining about the new problems it brings. (The problems by themselves are a good topic for a separate article.)

For SOA to succeed within the organization, the organization must adjust its software development lifecycle (SDLC) and IT management patterns to accommodate and MAXIMIZE the new technology pattern. To date, few enterprise IT shops have done so.

IBM expresses this in their Rational Rules for Software Development, in typical engineering speak...

Rational System Engineering
The Six Principles of System Development
Rule #6

Development Organization should Reflect Product Architecture.

“Technology dictates a change in architecture, and organizations that do not adapt experience a loss of productivity and effectiveness…”


How should the organization adapt? Not having seen a simple guide to do so, I think that will be my next series of articles.

"How to Adapt your IT Enterprise to Get Positive SOA Value". This is an IT management problem, and IT architecture problem, a project management problem, and a SDLC problem.
