
What’s With Web Service Security???


Web service security is a tricky business.  EVERY service exposed by any service provider, be it .Net, Java, the mainframe, or any other platform, needs to be secured.  That's certainly true if it's exposing sensitive data (say, customer data) or allowing activation of a business process, and most especially if it's involved in a financial transaction.

But how do you do it?  While every vendor and (almost) every technology announces compatibility with every web service security buzzword (WS-Security, SAML, X.509, etc.), none of them describe how to actually make use of all this security data attached to the web service request.

I’ve had recent discussions with IBM, Oracle, and Software AG (as leading SOA middleware tool providers) on this exact topic and the results are disappointing.

The architecture model here says that to provide SOA security I should use these tools as a SOA security layer, allowing my services to go about their business while the security tools grab and process the security data attached to the requests.

This means, for example, that my service-requesting systems could activate a SOA security module (provided by a vendor tool) and get an appropriate security context added to the service request.  This might be composed of a WS-Security block with certificates or keys and a SAML block carrying the requesting application context (instance information – production/test/etc.) and user context (a user name, user ID, or, with federated identity management, a session ID).
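As an illustrative sketch (not any vendor's agent API), here is the kind of SOAP security header such a client-side module might attach. The namespaces are the standard WS-Security and SAML 2.0 ones; the user name and application-instance values are made-up placeholders:

```python
# Sketch only: build a SOAP header with a WS-Security UsernameToken
# and a SAML-style attribute carrying the application context.
import xml.etree.ElementTree as ET

WSSE = "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd"
SAML = "urn:oasis:names:tc:SAML:2.0:assertion"
SOAP = "http://schemas.xmlsoap.org/soap/envelope/"

def build_security_header(user, app_instance):
    header = ET.Element(f"{{{SOAP}}}Header")
    sec = ET.SubElement(header, f"{{{WSSE}}}Security")
    token = ET.SubElement(sec, f"{{{WSSE}}}UsernameToken")
    ET.SubElement(token, f"{{{WSSE}}}Username").text = user
    assertion = ET.SubElement(sec, f"{{{SAML}}}Assertion")
    stmt = ET.SubElement(assertion, f"{{{SAML}}}AttributeStatement")
    attr = ET.SubElement(stmt, f"{{{SAML}}}Attribute", Name="ApplicationInstance")
    ET.SubElement(attr, f"{{{SAML}}}AttributeValue").text = app_instance
    return ET.tostring(header, encoding="unicode")

print(build_security_header("jsmith", "production"))
```

A real agent would of course also sign or encrypt the block; the point is that the requestor's own code never needs to know this header exists.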

My middle step, usually the ESB, would automatically process the security context, authenticate the source, and authorize the requested action.  It would then make any calls to the providing system with an updated security context.
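Sketched in miniature (the token store and request shape are placeholders, not any ESB's actual API), the middle step looks like this: validate the incoming context, then forward the request with a new context identifying the ESB as the authenticated intermediary.

```python
# Sketch of the ESB middle step: authenticate the inbound security
# context, then re-stamp it before calling the providing system.
TRUSTED_TOKENS = {"tok-123": "jsmith"}  # placeholder token store

def esb_process(request):
    token = request.get("security", {}).get("token")
    user = TRUSTED_TOKENS.get(token)
    if user is None:
        raise PermissionError("authentication failed")
    forwarded = dict(request)
    # Updated context: the original user plus the asserting intermediary.
    forwarded["security"] = {"user": user, "asserted_by": "esb"}
    return forwarded

out = esb_process({"operation": "getCustomer",
                   "security": {"token": "tok-123"}})
print(out["security"])
```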

The providing systems would filter the requests through a SOA security agent, which would perform the same security actions (process the security context, authenticate, authorize, and log for auditing – triple-A security).
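The triple-A steps that provider-side agent performs can be sketched as follows – a hedged illustration only, with a made-up credential store and permission table standing in for whatever directory or policy server a real deployment would use:

```python
# Sketch of provider-side triple-A enforcement:
# authenticate the caller, authorize the action, log for audit.
import hashlib
import hmac
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("soa.audit")

CREDENTIALS = {"jsmith": hashlib.sha256(b"secret").hexdigest()}  # placeholder
PERMISSIONS = {"jsmith": {"getCustomer"}}                        # placeholder

def authenticate(user, password):
    digest = hashlib.sha256(password.encode()).hexdigest()
    return hmac.compare_digest(CREDENTIALS.get(user, ""), digest)

def authorize(user, operation):
    return operation in PERMISSIONS.get(user, set())

def enforce(user, password, operation):
    if not authenticate(user, password):
        audit.info("DENY auth user=%s op=%s", user, operation)
        return False
    if not authorize(user, operation):
        audit.info("DENY authz user=%s op=%s", user, operation)
        return False
    audit.info("ALLOW user=%s op=%s", user, operation)
    return True

print(enforce("jsmith", "secret", "getCustomer"))     # True
print(enforce("jsmith", "secret", "deleteCustomer"))  # False
```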

That’s a reasonable expectation for SOA security tools.  Now where are the vendors at?

Oracle used to be the closest to this, with AmberPoint offering agents for providers and a central agent/server for environments that couldn't handle agents or installations where agents weren't desired.  However, Oracle has apparently removed this security functionality and built a new Oracle Web Services Manager tool that does not have agents.  Rather, it allows policies to be created and validates security as the services pass through the central security server or through security-enabled Oracle SOA tools (such as their ESB).  [They did say they intend to add agents for select environments, such as SAP and .Net, over the next year.]

IBM never fully detailed the model.  Rather, they allow WebSphere Registry & Repository (WSRR) to define policies which can then be pushed to a DataPower appliance (a physical XML firewall device), which can then act as a central security service doing the authentication, authorization, and audit logging.  The providing systems have to be manually programmed to only accept requests from the DataPower, and the requesting systems have to manually generate the security header, completing the security picture.  This model is fine for external web service security (exposing internal web services across the firewall for internet requests) but isn't so great for internal use.

Software AG’s options are wider but more confusing.  Their design-time repository, CentraSite, can define and generate security policies.  These policies can be pushed to the Mediator add-on for their ESB, which can then authenticate, authorize, and log for auditing as service requests arrive at the ESB, or they can be pushed to Layer 7 physical XML firewall devices (Layer 7 does have a “software appliance” as well) for enforcement, like the DataPower in the IBM option.  Interestingly, IF you have Software AG’s SOA runtime monitoring tool, Insight, the Insight agents can be tasked by the Mediator ESB add-on to perform authentication and authorization (with the monitoring tool already doing the logging).

In all cases, the requesting system is responsible for generating the security information required.  No one (as far as I know) is providing an agent or client-side plug-in that takes over the security layer tasks from the service requestor.

Now, it’s worth noting that all the functionality mentioned above can be created manually without too much difficulty using the native features of ESBs (whether from IBM, Oracle, Software AG, or others), and Java as well as .Net provide a host of included classes and APIs for handling web service security.  (Even IBM CICS does nowadays.)

The question is: can I just load in my web service security layer, or do I have to take a tool from a vendor and STILL spend time, on the requesting side, the providing side, or both, building the steps necessary to complete the web service security picture?

At the moment, it seems the latter is still the case.  No matter what tool combination is used, I’m still going to be defining the security standards I want to use and writing some code to inject those security protocols into my web service requests (or interpret them on the way in).

Ten years into web services, this is disappointing.  And the customers I deal with – those looking to increase the SOA maturity level of their environments as well as some latecomers to SOA – just don’t understand why this should still be necessary.
