
Batch Out to Web Services?


Calling web services from the mainframe has become a frequent question.  As applications (and data) migrate off the mainframe to systems hosted on Linux or Windows servers, the old trustworthy batch jobs may suddenly need to access remote systems and web services to do their work.

Here’s how one person phrased their problem…

We are currently looking at doing a partial migration away from a mainframe.  Some of the functionality is written in mainframe COBOL and is called from mainframe batch programs.  We would like to move these COBOL programs off the mainframe.  Question: if we moved the functionality in the COBOL program to a Java or .NET web service, is there a way to call this web service from a mainframe batch program?

Technically this is an easy answer.  Yes, web services can be invoked from the mainframe.  They can even be invoked directly from CICS and from IBM Enterprise COBOL (as of CICS TS 3.1).  There are some technical limitations: Enterprise COBOL web services cannot handle complicated XML structures or every XML data type, which can make it a challenge to call pre-defined web services built to modern standards.  But if the web services are being created specifically to serve the Enterprise COBOL caller, there is no problem (technically speaking).
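On the COBOL side the invocation plumbing is generated by CICS tooling, so no hand-written HTTP code is involved there.  Purely for illustration, here is a minimal Python sketch of what a single SOAP invocation looks like at the HTTP level; the endpoint, SOAPAction, and body element are hypothetical, not from any real service:

```python
# Sketch of one SOAP call over HTTP (SOAP 1.1 style envelope).
# Endpoint URL, SOAPAction header value, and payload are invented examples.
import urllib.request

def build_envelope(body_xml):
    # Wrap the payload in a standard SOAP 1.1 envelope.
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
        '<soap:Body>' + body_xml + '</soap:Body>'
        '</soap:Envelope>'
    )

def call_soap_service(endpoint, soap_action, body_xml):
    req = urllib.request.Request(
        endpoint,
        data=build_envelope(body_xml).encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": soap_action},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

Note that every such call pays for connection setup and envelope processing, which is the crux of the performance discussion below.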

Architecturally, this type of batch web service invocation does have a major flaw.  Anyone doing batch programming knows that database commits can cause significant performance problems for batch, and therefore careful management of database commits (and other database activity) is part of every batch implementation (commonly, commits are issued only every 100 transactions or more).
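The grouped-commit pattern can be sketched in a few lines.  The interval of 100, the fake connection object, and `apply_record` are illustrative assumptions, not from the article:

```python
# Sketch of the classic batch commit pattern: commit every N records
# instead of once per record.  `conn` is assumed to expose a DB-API
# style commit(); apply_record performs the per-record database work.
COMMIT_INTERVAL = 100

def process_batch(conn, records, apply_record):
    pending = 0
    for record in records:
        apply_record(conn, record)   # e.g. an UPDATE for this record
        pending += 1
        if pending >= COMMIT_INTERVAL:
            conn.commit()            # one commit covers 100 records
            pending = 0
    if pending:
        conn.commit()                # flush the final partial group
```

With 250 records this issues 3 commits instead of 250, which is exactly the kind of amortization the article describes.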

Similarly, every web service invocation has an overhead cost.  Multiply this by tens of thousands or hundreds of thousands of transactions and your batch process will spend most of its time waiting to make web service connections.  That time can run to hours or more.
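A quick back-of-envelope calculation shows why.  The 50 ms per-call overhead below is an assumed figure, not a measurement from the article:

```python
# Back-of-envelope cost of per-call overhead at batch volumes.
def overhead_hours(per_call_seconds, calls):
    # Total time spent purely on invocation overhead, in hours.
    return per_call_seconds * calls / 3600.0

# At an assumed 50 ms of overhead per call, 100,000 calls spend
# roughly 1.4 hours doing nothing but connecting.
wasted = overhead_hours(0.050, 100_000)
```

Even a modest 10 ms of overhead still costs about 17 minutes per 100,000 calls, which is why the connection count, not the payload size, dominates.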

The solution is similar to the database commit approach: design the web service to carry multiple transaction requests in a single invocation.  The communication connection is made once, and an array of transaction requests (in mainframe COBOL speak) or a list of SOAP documents (in web service speak) is transmitted during the connection.
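Sketched in Python, the client-side packing might look like the following; the `processTransactions` and `txnRequest` element names and their fields are invented for illustration:

```python
# Sketch: pack many transaction requests into one request body so a
# single connection carries the whole array.  Element names are
# hypothetical, not from any real service contract.
def build_batched_body(transactions):
    items = "".join(
        "<txnRequest><id>%s</id><amount>%s</amount></txnRequest>"
        % (t["id"], t["amount"])
        for t in transactions
    )
    return "<processTransactions>" + items + "</processTransactions>"
```

One envelope built this way replaces thousands of individual invocations, so the per-call overhead is paid once per batch rather than once per transaction.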

Naturally the receiving web service must be designed to handle multiple transaction requests in a single invocation, and practically this is not a problem in any modern environment (such as Java or .NET).  It is an unusual pattern that most don’t consider, but there is no reason a web service shouldn’t handle multiple transactions in a single request body, or multiple request bodies in a single communication instance.
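On the receiving side, handling many transactions per invocation is just a loop over the request body.  A hedged Python sketch (the `txnRequest` element names are invented for illustration; a real Java or .NET service would use its own XML binding):

```python
# Sketch of a receiving service that processes many transactions from
# one request body and returns one result per transaction.
import xml.etree.ElementTree as ET

def handle_batched_request(body_xml, process_one):
    root = ET.fromstring(body_xml)
    results = []
    for txn in root.findall("txnRequest"):
        txn_id = txn.findtext("id")
        amount = float(txn.findtext("amount"))
        results.append(process_one(txn_id, amount))
    return results
```

The per-transaction business logic stays unchanged; only the dispatch loop around it is new.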

This is not the only approach to the problem; it is simply the HTTP/SOAP-based one.  Other alternatives include queuing: loading all the requests into a messaging system (such as IBM WebSphere MQ), with the processing system using a reasonably large thread pool to pull and process the messages as they arrive.
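The queuing alternative can be sketched with an in-process queue standing in for the message broker; the worker count and request contents are illustrative assumptions:

```python
# Sketch of the queuing pattern: requests are loaded into a queue and a
# worker pool drains it, standing in for an MQ consumer group.
import queue
from concurrent.futures import ThreadPoolExecutor

def run_queued_batch(requests, process_one, workers=8):
    q = queue.Queue()
    for r in requests:
        q.put(r)                      # "load all the requests" step
    results = []                      # list.append is thread-safe in CPython
    def worker():
        while True:
            try:
                r = q.get_nowait()
            except queue.Empty:
                return                # queue drained; worker exits
            results.append(process_one(r))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for _ in range(workers):
            pool.submit(worker)
    return results                    # pool shutdown waits for all workers
```

Unlike the batched-SOAP approach, this decouples the producer and consumer entirely: the batch job finishes loading the queue without waiting for processing to complete.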

These ‘mixed environment’ batches are already very common, and many organizations have no intention to move away from the ‘large processing job’ approach.  As resources spread even farther and into the cloud, this problem will grow ever more ‘interesting’.
