
You Measured What???

I had a very interesting meeting with a client a few weeks back. (For those who don't read regularly, I do high-level IT consulting.) This client has been doing SOA for a few years. Actually though, they're doing SOI - Service Oriented Integration - not SOA, Service Oriented Architecture. Meaning they're creating and/or exposing lots of services from existing big-box applications, and using web service technology to connect apps together.

After a couple of years of SOI they have several hundred exposed services and application interconnections. Their architects have spotted that some services see much higher reuse than others, and are trying to apply architecture standards - standard entities, standard interface patterns - to every new service, to increase reuse, decrease maintenance, and move toward SOA architecture goals and ROI. This is a very natural step along the SOA maturity path.

Along comes a change in senior business management. They bring a new management model... all ROI (return on investment), all the time. IT is told to make its decisions based on ROI.

So they're faced with a new project to replace a primary IT system. They select a new system (the one with the most likely return on investment versus their current needs). The architects then spec out the total project cost to integrate the new system in place of the old one, remodeling every interface with better granularity, standard entity structures, and standard interface patterns. It's a big system with lots of ins and outs (as a former manager of mine used to say, lots of gizintas and gizoutas). The interface project itself becomes a big number.

So IT management, following the business directive, says: "Give us an estimate for just hooking it all up. No standards, no concerns about granularity, no worrying about patterns, no concerns for future reuse. Forget about SOA, just use whatever technology makes each connection fastest. Give us 'install the system the old way,' but with the advantage of web services making the connections faster." They get the estimate: it's half of what the SOA methodology estimate is. They then ask: if every interface were already a properly SOA-architected, granular, standard entity today, how long would it take to interface the new system? They get an answer that is about 25% less than the "just connect it all up" estimate.

To illustrate:

SOA-architected replacement system integration = A
Quick "hook everything up" integration: B = A/2
If everything were already SOA today: C = B - 25% (i.e., C = 0.75 * B)
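
To make those relative sizes concrete, here's a quick back-of-the-envelope sketch in Python. The $1M figure for A is hypothetical (mine, not the client's); only the ratios B = A/2 and C = 0.75 * B come from their estimates.

# Hypothetical dollar figure; only the ratios come from the client's estimates.
A = 1_000_000      # integrate the replacement system the SOA way, re-architecting every interface
B = A / 2          # quick "hook everything up" integration - no standards, no reuse
C = 0.75 * B       # integration cost if every interface were already properly SOA today

print(f"A (SOA way now):       ${A:>10,.0f}")
print(f"B (quick hookup):      ${B:>10,.0f}")
print(f"C (if already SOA):    ${C:>10,.0f}")
print(f"Premium paid today for the SOA way:  ${A - B:>10,.0f}")
print(f"Savings at the next replacement:     ${B - C:>10,.0f}")

Under the client's measurement, the only offsetting benefit for that premium shows up at the next replacement - which is the heart of the problem that follows.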

They then calculate that they replace this system approximately once every 8 years. (Actually they've only replaced it once before, 8 years ago, but it's the best they can measure.) The interpretation of this calculation is that there can be no ROI from the additional SOA work for 8 years, and even then the savings is only 25% of the cost of doing it the quick way.

Meaning the ROI horizon for doing the project the SOA way is (Time of Replacement * (A / C)), or approximately 48 years.

Naturally, under the new ROI decision model, the SOA approach does not represent a reasonable return on investment. Push the architects out of the way and just let the developers hook it all together quickly, using whatever methods and structures get it done.

This raises the question, what's an ROI measure of SOA?

Measuring ROI on IT projects is a well-defined process. SOA projects, however, typically cross multiple traditional project boundaries, involve preparation for future reuse, and by design create an increase in software flexibility and the associated business flexibility. These characteristics render the traditional IT ROI methodologies ineffective.

To complicate it further, as we take steps down the SOA maturity path we find we're no longer talking about systems and applications; instead we're talking about business processes and orchestrated components. The traditional ROI methodologies, tightly bound to system and application projects, don't fit the new model at all!

A few years ago I wrote an internal paper for my consulting firm on methods of calculating SOA ROI. Among the methods I included were Estimated Reuse, Indirect ROI, and (taken from SOA Magazine) a Project-Oriented SOA ROI method by Leo Shuster. I noted that IBM and TIBCO also provided some brief online tools to help (here and here, TIBCO calc link on the bottom right). Yet as I faced this conversation, these methods also seem dated.
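
As a rough illustration of the Estimated Reuse idea - my own sketch of the general concept, not the actual method from that internal paper or from Shuster's article - amortize the extra cost of building a service the reusable, standards-based way across the consumers you reasonably expect it to have:

# Illustrative sketch only; all figures are hypothetical.
def reuse_comparison(reusable_build_cost, one_off_build_cost, expected_reuses, reuse_hookup_cost):
    """Cost of one reusable, standards-based service plus cheap hookups for each
    later consumer, versus a fresh one-off connection for every consumer."""
    soa_total = reusable_build_cost + expected_reuses * reuse_hookup_cost
    one_off_total = one_off_build_cost * (1 + expected_reuses)
    return soa_total, one_off_total

# Assumed figures: the reusable service costs twice as much up front, but each
# later consumer hooks up for a fraction of the cost of another one-off build.
soa, one_off = reuse_comparison(
    reusable_build_cost=40_000,
    one_off_build_cost=20_000,
    expected_reuses=4,
    reuse_hookup_cost=4_000,
)
print(f"Reusable service route: ${soa:,}")      # 40,000 + 4 x 4,000 = 56,000
print(f"One-off connections:    ${one_off:,}")  # 20,000 x 5         = 100,000

The return only appears once you count the consumers that haven't arrived yet - exactly the part a single-project ROI measurement leaves out.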

The ROI question is always bracketed by what you are measuring. In the early days of SOA the point of reference was existing systems, projects, and applications. Today, even when we attempt to discuss projects and systems, we find ourselves bound by interfaces, connections - or rather, feeds. Our process requires this bit of data or that transactional step to progress, and each must be provided by a different resource outside the single system, application, or project space. We are coordinating, orchestrating, and connecting all over the place to complete our goal - a business process.

And what my client above lost in their project-based ROI calculation was the opportunity cost of every future interaction with the replacement system. Every business process that interacts with that system will require 2x to 5x the effort to create a unique, non-reusable connection - along with its own security, monitoring, support, and a lifetime of maintenance.

The ROI on the original project might have been met within the first 2, 3, or 4 future business processes that have to interact with the new, poorly integrated system. With the right measurement, the SOA ROI isn't 48 years; it's probably closer to 1 year.
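
Here's a minimal sketch of that alternative measurement, reusing the hypothetical figures from the earlier sketch. The connection costs, the 4x effort multiplier (inside the 2x-5x range above), and the rate of new business processes are all assumptions of mine; only the A and B ratios come from the client's estimates.

# Same hypothetical base figures as the earlier sketch.
A = 1_000_000                  # integrate the replacement system the SOA way
B = A / 2                      # quick "hook everything up" integration
soa_premium = A - B            # extra spend today for doing it the SOA way

# Assumptions (mine): a reusable, standards-based connection to this system costs
# 50,000; a unique one-off connection costs 4x that (inside the 2x-5x range above);
# and roughly 4 new business processes need to touch this system each year.
reusable_connection_cost = 50_000
one_off_multiplier = 4
extra_cost_per_process = reusable_connection_cost * (one_off_multiplier - 1)
processes_per_year = 4

processes_to_payback = soa_premium / extra_cost_per_process
years_to_payback = processes_to_payback / processes_per_year
print(f"Future processes needed to recover the SOA premium: {processes_to_payback:.1f}")
print(f"Years to payback at {processes_per_year} new processes a year: {years_to_payback:.1f}")

With these assumed figures the premium is recovered after roughly three or four future processes, inside a year. Change the numbers however you like; once future connections are in the measurement at all, the payback horizon collapses from decades to a handful of projects.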

To compound it, there's also the lost future opportunity. This organization will not be able to take advantage of BPM (which can only seriously be used when there are granular, modular, standard-entity transaction steps exposed to be combined into new business process workflows). Nor will they be able to move functionality toward Cloud Computing. Nor even take advantage of standardized appliances such as the DataPower for moving the security and validation layer out to a managed hardware level (an early model of externalizing functionality toward Cloud Computing).

To explain this in IT terms takes a long essay, and it still might not succeed. The pictures below, however, illustrate the idea directly. If your home electrical situation looks like this, then you're going to have lost future opportunity and higher cost (and risk) any time you try to touch it in the future...


(Electrical Pole - Fallujah, Iraq, by Michael Totten)


(Home Electrical Box - Iraq, by Alex Barnes)

If it looks like this, integrating new components is a snap...


(Home or Office Electrical Box - Cincinnati, USA, by Craftman Electric)
