
Datapower – Balancing and Failover

 

Ok, this is a little more practical and technical than we usually get, but important nonetheless.

The IBM Datapower, like similar devices from Layer 7, Cisco, and others, provides SOA security, attack prevention, and a number of ESB-like abilities (somewhat of an "ESB lite").  However, these boxes tend to be EXPENSIVE, and a series of add-on software modules raises the price significantly.

The latter is a shock to many people.  One thinks of the Datapower as a physical device, like a network router.  While it is a physical device, and much of its performance comes from optimized software running on ASIC hardware chips, it is also a very sophisticated software platform.  As such, they’ve done the standard vendor thing of packaging many of the sophisticated abilities as separate software add-on modules, each adding ability and adding price.  For example (if I remember correctly), the ability of the Datapower to connect to a database stored procedure and expose it as a (secured) service is an add-on feature.

Because it’s an expensive device, because it’s advertised as a high-availability device, because it’s considered a hardware platform like a router, and because they want to isolate development/test from production, many organizations buy one box for development/test/QA and one for production.  (And often none for disaster recovery!)


This is a mistake in two ways.  First, even high-availability devices fail.  If the Datapower (or another hardware-based SOA security device) is layered into the security control of all SOA services, then when the device fails, ALL SOAP WEB SERVICES (or at least all that go through the device for security, runtime management, and/or ESB-lite abilities) are offline until the device is physically replaced.  Since it’s expensive, you can assume IBM doesn’t have spares sitting around at the local office.

Second, the device offers a multi-tenancy ability.  It has several ways to automatically separate logical instances from each other, with reasonable internal protection against one tenant impacting another.
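To make that separation concrete, here is a minimal sketch, in plain Python with hypothetical tenant names, host, and ports, of how calling applications might resolve the endpoint for their environment when dev/test and production live as separate logical tenants (for example, separate application domains) on the same clustered pair rather than on separate boxes.

```python
# Hypothetical mapping of environments to the appliance's logical tenants
# (e.g. separate application domains) and the front-side ports each one
# listens on.  Real names and ports would come from your own configuration.
ENVIRONMENT_TENANTS = {
    "dev":  {"domain": "dev",  "port": 9443},
    "test": {"domain": "test", "port": 9444},
    "prod": {"domain": "prod", "port": 8443},
}

def endpoint_for(environment: str, host: str = "datapower.example.com") -> str:
    """Return the base URL of the logical tenant serving this environment."""
    tenant = ENVIRONMENT_TENANTS[environment]
    return f"https://{host}:{tenant['port']}"
```

The isolation is logical rather than physical: all three environments ride on the same clustered hardware, so no single environment depends on a single box.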

For these reasons, it’s highly advised to cluster your Datapowers and use the multi-tenancy features to logically separate your development/test from production environments. 
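And here is a minimal sketch of the failover side of that advice, again in plain Python with hypothetical hostnames, ports, and service path.  It only illustrates the effect of having two appliances: a caller (or, more realistically, a network load balancer or the appliances’ own standby features) can shift traffic to the second box when the first is unreachable, so a single device failure no longer takes every routed service offline.

```python
import urllib.error
import urllib.request

# Hypothetical front-side endpoints on two clustered appliances.
APPLIANCE_ENDPOINTS = [
    "https://datapower-primary.example.com:8443",
    "https://datapower-secondary.example.com:8443",
]

def call_service(path: str, soap_body: bytes, timeout: float = 5.0) -> bytes:
    """Post a SOAP request, trying each appliance in order until one answers."""
    last_error = None
    for base_url in APPLIANCE_ENDPOINTS:
        request = urllib.request.Request(
            base_url + path,
            data=soap_body,
            headers={"Content-Type": "text/xml; charset=utf-8"},
        )
        try:
            with urllib.request.urlopen(request, timeout=timeout) as response:
                return response.read()
        except (urllib.error.URLError, OSError) as error:
            last_error = error  # request failed on this appliance; try the next
    raise RuntimeError(f"All appliances unreachable: {last_error}")
```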

And don’t forget disaster recovery!  If you are heavily reliant on this device and don’t have one in your disaster recovery site, then losing your primary data center can leave you out of service for an extended time until a physical replacement arrives and is reconfigured.  The basic rule is that whatever it touches, and whatever those services touch, will be offline until it’s replaced.  That could be a VERY big deal, as integration has spread far and wide throughout the enterprise.
