Clouds and SaaS

While IT analysts and pundits are busy declaring that SOA is dead, SOA has failed, and the downturn killed SOA, the hype of 2009 is Cloud Computing and the resurgence of Software as a Service (SaaS). (Microsoft Azure being a prime example.)

In my discussions with my corporate clients, as well as from my own extensive corporate history, I'm finding that allowing key corporate data and processes to leave the walls of the company-controlled data center is the main mental barrier to SaaS and Cloud Computing. Even though companies already outsource business processes and the associated data that goes with them - and outsource some applications to hosting providers - the thought of deploying their applications to an amorphous cloud and depending upon the vendor to just "support and provision it appropriately" is a mental leap they're not yet prepared to make. Relying upon vendor-operated services carries the same leap of faith, and they are not yet ready for it.

While corporate management has become more and more comfortable with Business Process Outsourcing - and telling the IT guys to "just interface with them" - most IT management has not yet made a similar jump. This may be because IT management struggles to define what it provides, what its core competency is. Few companies outsource their core competency; doing so would invalidate their existence. Most IT management still defines data center operations and computing resource provisioning as part of its core competency, rather than focusing on maximizing the automation of core corporate business processes.

Given that environment stability is a key IT operating factor, this is understandable. The question will be whether SaaS and Cloud providers can create environments of reliability matching traditional corporate data centers, and provide sufficient business guarantees, to make the savings, flexibility, and dynamic provisioning worth the risk. (What risk? The career risk to IT senior management in case of failure, or simply the risk of the unknown.)

As always, some bleeding-edge companies and managers will give it a go; if it goes well, they will gain a strategic advantage. Leading-edge companies will gingerly dip their toes into the water, trying a few projects and evaluating the potential for the future. Mainstream companies will sit on the sidelines and wait for the early adopters to shake out the bugs, especially in 2009's tough budget environment.

Related Link: Integration is a Thorny Issue for SaaS at Mergers and Integrations, by Loraine Lawson.
