Aug 26, 2013

NSA Tapping and Cloud Computing

Recent revelations in the U.S. have informed the public that conspiracy theorists’ fantasies are all too real: the U.S. National Security Agency (NSA) has been installing taps at key Internet points to absorb vast quantities of email and Internet traffic.

As IT professionals, using publicly available information (no inside or secret information was used in preparing this article) and an understanding of the Internet as a series of routers and servers, we understand that “tapping key Internet points” means copying streams of Internet traffic via router configurations and installing monitoring software on email servers and the like (directing copies to government monitoring servers).

The NSA isn’t secretly tapping into some sort of vast Internet cable bundles.  Rather, they’re walking into AT&T, Google, Yahoo, Microsoft, and the Internet backbone and primary service providers, installing software on their routers and servers, and installing NSA receiving servers on their networks and premises.

A few months ago ZapThink published an article questioning why the move into Cloud Computing via Public Cloud Vendors has been relatively slow…

Cloud Computing: Rethinking Control of IT : Jason Bloomberg, April 24, 2013

In my role as a globetrotting Cloud consultant, I continue to be amazed at how many executives, both in IT and in the lines of business, still favor Private Clouds over Public. These managers are perfectly happy to pour money into newfangled data centers (sorry, “Private Clouds”), even though Amazon Web Services (AWS) and its brethren are reinventing the entire world of IT.

Their reason? Sometimes they believe Private Clouds will save them money over the Public Cloud option. No such luck: Private Clouds are dreadfully expensive to build, staff, and manage, while Public Cloud services continue to fall in price. Others point to security as the problem. No again. OK, maybe Private Clouds will give us sufficient elasticity? Probably not. Go through all the arguments, however, and they’re still dead set on building that Private Cloud. What gives?

The true reason for this stubbornness, of course, is the battle over control…

Why do (IT) executives crave control so badly? Two reasons: risk mitigation and differentiation. If that piece of technology is outside your control, then perhaps bad things will happen: security breaches, regulatory compliance violations, or performance issues, to name the scariest.

(The article continues by explaining why this isn’t really true, how the risk is mitigated through SLA agreements, and how this gives you the advantage of separating responsibility and control.)

ZapThink misses the major point in my mind. 

There have been a variety of Cloud Computing articles superficially discussing the potential legal complications of a corporation having some of its business data in a different legal or national jurisdiction from its business.  And we’ve seen some practical challenges, such as major US service providers being hit with EU privacy standards violations.  What, for example, would happen if an EU customer sued the Cloud Vendor for “deletion rights” (an EU data right) after signing up with a US Internet service provider that happened to use an EU-based Cloud Resource Vendor for its storage?  IT executives naturally shudder at the business and legal complexity of data crossing state, national and international borders.

With the NSA monitoring revelations, we see much worse concerns.

In the case above, I could end up in a lawsuit outside my jurisdiction.  If I’m a small company, this could be catastrophic (if I’m a large corporation, merely ridiculously expensive).  But at least all I face is legal risk.  As long as the Cloud Resource or Platform Vendor is living up to its contractual responsibilities and technical features, my data and computing results remain private and controlled – though now I have the additional party of the Cloud Vendor in the mix.

With NSA monitoring, the Cloud Vendor may be forced (or coerced, or tempted with financial payments) to provide monitoring access without being permitted to notify me – all fully legal.  My company would thereby have no legal recourse to attempt to protect our data, because we wouldn’t even know such monitoring was occurring.  (From the monitoring revelations, it’s also become clear that the NSA is obtaining “secret court” orders which prevent the service providers / vendors from letting anyone know such monitoring is being requested or is occurring.)

Suddenly the controlling or paranoid IT executive is looking smart. 

We have now established that if your data leaves your premises, it may be secretly tapped / copied / monitored, and you’ll likely never know (leaving you no legal recourse to challenge it).  And while we certainly want our national security resources to be able to do their jobs and provide national safety, we also know that such authority is subject to misuse – and misuse of key company data could cost millions or even billions, or put a company out of business.

When police or investigative authorities arrive with a court order, we may legally challenge it as well as trying to keep the access as narrow as possible.  Even further, it may be our IT people providing the data (so we know exactly what’s leaving and what the potential business impact is).  If the NSA is monitoring or accessing Cloud Vendors, our data is leaving without any control or even knowledge on our part.  Our business risk is potentially unlimited.

The advantages of the Cloud now carry a real risk.

For personal use, cloud services now carry the same risk.  If you’re tying your Android phone to Google account sync or your iPhone to iCloud, your contacts are now (probably) being monitored.  How about if you’re using a Cloud backup service or online file storage (Google Drive, Microsoft SkyDrive, etc.)?  We don’t know, but after the recent revelations – which included NSA employees using these systems to spy on personal love interests – we’d be foolish to assume the data is being kept private from national security authorities.

If it leaves your premises and it’s not encrypted and kept encrypted at the destination, it’s only appropriate nowadays to assume it’s being monitored.  And if it is encrypted, you may warrant special attention.

Welcome to the digital age.  Your government is now online.

Aug 20, 2013

The Reality - Email Privacy or the Lack Thereof

Today a top law blogger freaked out after realizing that their emails can be read and monitored...

"The owner of (a encrypted protected email service that just shut down) tells us that he's stopped using email and if we knew what he knew, we'd stop too.  There is no way to (blog) without email. Therein lies the conundrum.  What to do?

...the simple truth is, no matter how good the motives might be for collecting and screening everything we say to one another, and no matter how "clean" we all are ourselves from the standpoint of the screeners, I don't know how to function in such an atmosphere.

I feel (unclean), knowing that persons I don't know can paw through all my thoughts and hopes and plans in my emails...  They tell us that if you send or receive an email from outside the US, it will be read.  (And many emails inside the US are accidentally picked up by those capture engines.)  If it's encrypted, they keep it for five years, presumably in the hopes of tech advancing to be able to decrypt it against your will and without your knowledge.

I hope that makes it clear why I can't continue. There is now no shield from forced exposure. …no one can feel protected enough from forced exposure any more to say anything the least bit (controversial or security related) to anyone in an email, particularly from the US out or to the US in, but really anywhere. You don't expect a stranger to read your private communications to a friend. And once you know they can, what is there to say?"

Much of the Internet and the abilities we take for granted today were never conceived for the large world-wide network they have become.  Email, as used today, and its base protocol (SMTP) are not encrypted, and messages follow the normal network routes to get where they are going.  The term “email” is misleading, because it’s NOT a sealed letter – it’s an open postcard (an open sheet of paper with an address on it).

This means:

- The sending post office and receiving post office can and do read the sending and received address AND the full content.

- Every network it passes through along the way (remember, the “Internet” is all cross-connected networks) can also capture and read the sending and receiving addresses AND the full content.  (Today, email typically passes through 8-16 networks on its way.)

- All the people along the way who operate the infrastructure that makes this happen – system administrators, database administrators, network administrators – have all the tools in front of them every day, as part of their jobs, to read any of this they want to.

That’s been the case from day 1 with Internet email.  NO public free email service NOR any service offered by any Internet ISP encrypts email traffic or storage.  It’s all traveling and sitting on open unprotected readable pieces of paper (so to speak).  The ONLY protection has been “privacy policies” of the companies and them being shamed (and losing customers) if they didn’t provide a reasonable semblance of isolation of your emails.
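The postcard analogy is easy to verify in code.  Here’s a small Python sketch (the addresses are invented) showing that the exact bytes handed to SMTP – headers, subject, and body – are plain readable text for every hop along the way:

```python
from email.message import EmailMessage

# Build a message exactly as a mail client would before handing it to SMTP.
msg = EmailMessage()
msg["From"] = "alice@example.com"      # hypothetical addresses
msg["To"] = "bob@example.org"
msg["Subject"] = "Quarterly plans"
msg.set_content("Here are the confidential plans we discussed.")

# This is the byte stream that travels hop by hop across every network
# in between -- unencrypted, fully readable by any intermediary.
wire_bytes = msg.as_bytes()
print(wire_bytes.decode("ascii"))
```

Every router, relay, and administrator in the path sees exactly what that `print` shows – postcard, not envelope.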

Even so, YOU have voluntarily given them the right to “paw through your data” since day one, and much worse!

- Your emails are being automatically scanned by Google / Yahoo / Microsoft, "to serve you ads and understand their customers."  Note that the ads displayed on the email pages are context sensitive to the email being read; this doesn’t happen by magic but by your email being auto-scanned for keywords.

- Your phone location (meaning your body’s location) is being sent to Google and Apple, "to provide better mapping services and location based app responses (letting an app know where you are to tell you about something near by)."  You can’t turn on GPS / Location Services without giving them permission to track you moment by moment. 

- If that’s not enough, the cellphone service provider logs every call, who you called or who called you (number on both ends – which you see on your phone bill) AND the approximate location of your phone when the call began (tracked by which cell tower you connected to and the strength of your signal, meaning how far from the tower you are – combined with where the other towers are near that one this can be used to narrow your location to within 1/10 of a mile).

- Of course, every SMS is logged (source and destination) as well.

- Your office computer is tracking what web sites you browse, in some cases what programs you are running, in some cases even how fast and what you are typing, and reporting it to your IT department and/or your boss.  If you are visiting whatever your office considers improper sites, the IT department and boss are being alerted.

- Your office emails are being scanned for improper words and phrases (sexual, harassing, racist, violent, threatening), alerting the IT department and your boss if there's a hit.  They may also be scanned to see if you are sending out company proprietary information.

- Anytime you access any Google or Yahoo or Microsoft service – or, for that matter, any web site that chooses to collect the data – they know from where you did it (by IP address, which, if correlated with the ISP, gives a physical location down to the connecting router or actual cell location) and from what type of equipment you did it.  Are you at home or in the office?  On a desktop workstation, laptop, iPad or Galaxy phone?  Running Windows or a Mac or iOS or Android?  Which browser, and what size screen?  This information can be obscured (with special, fully legal utilities) except for the IP address, which can be hidden via a VPN or anonymizing service (also fully legal).  But few people do so (and because few people do so, there’s a suspicion about people who do).
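As an aside, the tower distance trick mentioned above is simple physics: received signal strength falls off predictably with distance, so a path loss model can be inverted to estimate range, and intersecting ranges from neighboring towers narrows the location.  A rough Python sketch – the reference values and path loss exponent here are illustrative assumptions, not actual carrier parameters:

```python
def estimate_distance_m(rssi_dbm, ref_dbm=-40.0, ref_dist_m=1.0, path_loss_exp=3.0):
    """Invert the log-distance path-loss model:
    rssi = ref_dbm - 10 * n * log10(d / ref_dist_m)  =>  solve for d."""
    return ref_dist_m * 10 ** ((ref_dbm - rssi_dbm) / (10 * path_loss_exp))

# A phone reporting -100 dBm under these assumed parameters sits roughly
# 100 m from the tower; combining ranges from two or three towers
# narrows that circle to a small area.
print(round(estimate_distance_m(-100.0)))
```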

Nothing is new with any of this.  These are all a basic part of the operating model of all of these services from day one.  If you wish to avoid it, you have to avoid using these services.  (There are a few alternatives that theoretically offer protection, privacy or lack of logging – naturally such services are costly.)

However, there has been a FAÇADE of privacy and protection.  And there has been a general thought that given the huge amount of such data, there may be anonymity and protection through obscurity.  The façade has always been false – the data has always been easily accessible.  And if it exists, police or courts or other authorities will go after it if it’s in their interest to do so.  (There are companies who intentionally limit their logs of such things to 2 weeks or 3 months, to prevent them being called into any court case.)

The big deal now is that the U.S. government has joined the game and is, via various means, grabbing or monitoring some of the data above.  And with new levels of computer processing power and “BigData” huge data set analysis techniques, Big Brother isn’t occasionally poking in to take a look – he’s constantly monitoring.

Given that the companies involved have basically been doing the same since day one, the primary change is that the façade has been blown away.

There are ways to reduce exposure without completely eliminating use of such services.  But if you’re using Google or Microsoft or Facebook (or other such web sites or phone services), assume EVERYTHING you do is completely 100% public.  Email, chat, SMS, sites you browse, pictures you share – it’s all being tracked … correlated, profiled.

Given that we talk of BigData capabilities and BAM (Business Activity Monitoring), we in the IT industry shouldn’t be surprised.

Jul 2, 2013

How Granular? Or, What’s a Business Service?

What is a Service or Component?  It sounds like a simple question.  But I was approached by an organization that had done no component or service oriented or even object oriented development.  So the question was much more complicated…

What should a service / component / object be?  What amount of business functionality should it include?  Or, in technical terms, what granularity should your business services or components have?

This question came up again for me, as I’m now dealing with the same question relative to Business Processes (BPM or BPA workflows).  That will be a future post.

But for how granular should a service or component be?  Here’s my presentation on the topic…
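As a taste of the topic, here’s a hypothetical Python sketch of the trade-off: a fine-grained (chatty) interface versus a coarse-grained business-level one.  The service and field names are invented; each method call stands in for one remote round trip:

```python
class CustomerService:
    """Hypothetical service; each method call stands in for one remote round trip."""
    def __init__(self):
        self.calls = 0
        self._db = {"id": 7, "name": "Acme", "address": "1 Main St", "status": "active"}

    # Fine-grained: one network hop per field.
    def get_name(self):    self.calls += 1; return self._db["name"]
    def get_address(self): self.calls += 1; return self._db["address"]
    def get_status(self):  self.calls += 1; return self._db["status"]

    # Coarse-grained: one business-level operation, one hop.
    def get_profile(self):
        self.calls += 1
        return dict(self._db)

svc = CustomerService()
fine = (svc.get_name(), svc.get_address(), svc.get_status())   # 3 round trips
fine_calls = svc.calls

svc = CustomerService()
profile = svc.get_profile()                                    # 1 round trip
coarse_calls = svc.calls
print(fine_calls, coarse_calls)
```

Three hops versus one: once real network latency is attached to each hop, the coarse-grained design wins quickly – which is why granularity is usually drawn at business-meaningful operations rather than individual fields.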

Jun 24, 2013

Batch in the age of Java, SOA and Event Driven Business


My customer presented me with a major enterprise architecture task.  “We’re building new systems in a new environment, it’s time to modernize our batch processing.  Please recommend a toolset and processing model to move our legacy batches to our new environment.”  I didn’t realize it at the time, but this was a trick question.

I spent an extensive amount of time investigating modern batch tools as well as Java development and batch.  What I found was a very limited set of such tools.  More importantly, as I consulted various architects I know around the world, the question of “why are you trying to modernize batch?” kept arising.

The question is less straightforward than it sounds.  SOA, a component orientation, real time business and event driven business make batch exceptionally challenging, but more importantly much less relevant.

Attached is the result of my research, a whitepaper on a modern approach to batch in light of Service Oriented Architecture, Real Time processing, and a more Event Driven business approach.

Modern Batch Directions - Moving from Legacy Batch to Service Oriented - Component Modeled - Event Driven Batch...
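One way to picture the whitepaper’s direction: the same business rule can run as a nightly batch over accumulated records, or fire per event as each record arrives.  A hedged Python sketch (the names and the discount rule are invented for illustration):

```python
def apply_discount(order):
    # The business rule itself is identical in both models.
    order = dict(order)
    if order["total"] >= 100:
        order["total"] *= 0.9
    return order

# Legacy batch model: accumulate all day, process in one nightly run.
def nightly_batch(orders):
    return [apply_discount(o) for o in orders]

# Event-driven model: the same rule fires per event, so results are
# available in real time and the nightly batch window disappears.
def on_order_event(order, results):
    results.append(apply_discount(order))

incoming = [{"id": 1, "total": 50}, {"id": 2, "total": 200}]

batch_out = nightly_batch(incoming)

event_out = []
for evt in incoming:
    on_order_event(evt, event_out)

print(batch_out == event_out)
```

Same rule, same results – but the event-driven form delivers them as the business happens, which is exactly what makes the classic batch window less relevant.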

Feb 5, 2013

Is the Change the Project?

(Large Government Agency) has begun one of the largest government IT projects in (Country).

(Large Government Agency) has a strong desire to move into the future, provide more efficient and user friendly systems with web and mobile interfaces, and move from 2nd generation code based systems (requiring over 150 maintenance programmers) to a 5th generation rules and workflow based system.

Unfortunately failure to upgrade their software environment and approach in the past left them with not only 2nd generation code and user interface (NATURAL by Software AG and 3270 green screens running in an IBM Mainframe environment) but with a 2nd generation architecture. The mixed screen & processing logic, pure mainframe batch processing model and narrowly specialized data model was judged unable to be extended for use as a transaction engine or even be service enabled. Certainly no part of a 3270 green screen 80x25 character based user interface can carry forward into a web model.

As a result (Large Government Agency) made the radical decision to THROW AWAY their complete existing application set and REBUILD THEIR FULL APPLICATION ENVIRONMENT. Project (Rebuild) at (Large Government Agency) is a 10 year project to replace each (Large Government Agency) business application set with a newly analyzed, architected, and developed application. The first application set is scheduled to go live in January, 2014, and will replace the Customer Data Management system, the (Main Business Process A) system, the Payment Processing System, and will create new systems for (previously non-automated high-personnel-overhead business process), Electronic Document Management (moving (Large Government Agency) from paper and file driven processes to fully electronic processes), and a new (customer submission process) Website.

The project involves over 120 consultants working in the (nice new) Technology Park in (major city) near the (Large Government Agency) headquarters, but purposefully not in it. The project is not just creating new software but also modernizing and streamlining business processes, where possible given the complex laws, regulations, court decisions and labor union limitations.

Building a modern application from scratch, the project management has decided to go with a totally modern approach, meaning:

- True Service Oriented Architecture (not just web services – modular, self-contained business step components)
- Rules based (FICO Blaze rules engine)
- Event driven
- Workflow process controlled (Software AG webMethods BPM engine)
- Java development in the IBM WebSphere Application Server Java Enterprise environment
- DB2 10 with a hybrid XML data model, for complex yet easily adjustable data structures designed for fast reading of full data sets for rules processing

~5% of the citizens of (country) interact with (Large Government Agency) on a regular basis. Today it takes those citizens an average of 142 days and 5 visits to (Large Government Agency) offices to complete their interaction. This project will make the process easier, faster and less expensive, both for (Large Government Agency) and for the citizens.

Of course, projects of this scope carry great risks and a history of failure. A constant customer feedback process and agile development with independent modular deliverables attempt to minimize the risk. A rigorous testing, stress testing, and user testing schedule will further reduce it.

Yet, even with these project steps to reduce risk, an organization that has avoided any major systems or application changes in a human generation faces enormous cultural challenges in accepting such an upgrade. Cooperation between the business and IT has been focused around providing service (as a utility), not around explaining and automating processes. The IT infrastructure teams are used to managing equipment capacities several factors behind the current generation. Managing the cultural challenges within the organization and within the IT organization may be even higher risk than the large development scope of the project.

That said, (Large Government Agency) has hired teams of top tier consultants and given the project a realistic time frame and sufficient budget to make it happen. And perhaps most importantly, management has realistic expectations.

(Large Government Agency) has a reasonable likelihood of step by step success.  If dealing with change is difficult in most organizations, dealing with change IS the real project in this organization.

Jan 29, 2013

My Internet’s Too Fast

Whether we’re integrating systems or building new SOA architected applications (with heavy cross-component communication that may be flowing between physical machines or across virtual machine clusters – which again drop down to separate physical machines), the network backbone makes a significant difference.

A good enterprise data center is running at least a 1gbit backbone, with portions or major interconnects at 10gbit.  Further, good network engineers properly segment the network, making sure the heavy traffic patterns have maximum capacity available (and aren’t slowed down by users streaming Internet radio).

Between applications in the core of the data center, we rarely run into network capacity or network speed as our major performance problem.  (Credit those network engineers.)  This doesn’t mean the issue should be ignored: if some applications are integrated in a high speed query or transactional pattern, there can definitely be a major performance benefit in making sure their connection moves up from 100mbit to 1gbit, or from 1gbit to 10gbit.  Poorly architected SOA services and application integrations that have many granular transactions and/or heavy crosstalk can see a surprising performance benefit from a network speed upgrade, or even just from creating separate or multiple network channels (spreading the communications across multiple physical network ports).

Somewhat surprisingly, those easier options and solutions are becoming LESS viable with the move to virtual machine environments (and for mainframe integration), where one or a small number of physical network ports is shared among a large number of virtual machines or LPARs.  In these cases, increased network speed (with associated faster ports) is the only option.
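A back-of-the-envelope calculation shows why chatty integrations hurt even on a fast network – the dominant cost is round trips, not bandwidth.  The latency and message figures below are illustrative assumptions:

```python
def total_time_ms(round_trips, latency_ms, payload_kb, bandwidth_mbit):
    # Transfer time for the payload plus one latency hit per round trip.
    transfer_ms = (payload_kb * 8) / (bandwidth_mbit * 1000) * 1000
    return round_trips * latency_ms + transfer_ms

# 500 granular calls of ~1 KB each at 0.5 ms LAN latency,
# versus the same 500 KB moved in one batched exchange, both on 1gbit:
chatty  = total_time_ms(500, 0.5, 500, 1000)
batched = total_time_ms(1,   0.5, 500, 1000)
print(chatty, batched)
```

On these assumed numbers the chatty pattern is dozens of times slower on the very same link – upgrading the port helps the transfer term, but only reducing crosstalk (or spreading it across channels) attacks the round-trip term.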

This is a lower level physical problem that many an integration or SOA team will miss in trying to diagnose integration performance problems.

Interestingly, I was brought to this topic by a recent Internet upgrade at home.  The cost of high speed has become reasonable, and its availability has spread across wide areas.  I upgraded mine in the past week and was surprisingly disappointed.

Doing a number of speed checks, I found I could get from 50% - 90% of my purchased speed to test points in my nearby area.  Outside my area the numbers are all over the place…

New York – 70%
Paris – 10%
London – 40%

What’s going on?  First is a basic problem… how’s my router performance?  When we were working with 2mbit and 5mbit Internet, putting a slow cheap processor in the home routers was sufficient.  Working with 10mbit, 30mbit or 50mbit Internet needs a LOT more router CPU power.

Second, just like in the data center example above, IF the routers or network paths I’m going through to get to the destination are over 50% of their capacity, my performance is going to suffer.  And having some Internet hosting accounts for private and family use, I can certainly note that my hosting providers have NOT upgraded the capacity of those servers.  (They may even have only 100mbit network ports.)

What does it mean when the default “low” speed for home users is becoming 10-20mbit, with low cost options of 20-100mbit INTERNET, yet the servers were built to serve 100 users at 1mbit each?

It means I’m not going to get 100% of my purchased speed because the sites I visit can’t deliver it.
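The mismatch is simple arithmetic.  A hedged sketch with made-up numbers:

```python
def delivered_mbit(client_plan_mbit, server_port_mbit, concurrent_users):
    # Your download can never exceed the server's fair share of its own port.
    server_share = server_port_mbit / concurrent_users
    return min(client_plan_mbit, server_share)

# A 100mbit home plan against a hosting server with a 100mbit port
# that was sized for 100 users at ~1mbit each:
print(delivered_mbit(100, 100, 100))   # busy server: each user sees ~1mbit
print(delivered_mbit(100, 100, 2))     # quiet server: the plan is the limit
```

In other words, past a certain point the bottleneck moves from your last mile to the far end – and no home upgrade fixes that.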

My Internet is too fast.  :-(

Sep 24, 2012

Query Service or Synchronized Data?

- When linking two systems, with one system providing data to another system, the providing system's physical environment must be sized for the capacity of the requesting system and the reliability of the requesting system. Another way of saying this is that the providing system must meet or exceed the quality of service or SLA of the requesting system.

- In the example I was reviewing, the HR system was the base for the desired information and was sized for the user capacity of the HR department and the reliability impact of an outage of the HR department. If it is to be used as a real-time providing system for Department B, its capacity must be increased from the HR department (10 users) to the capacity of the Department B user base, the main business area (3,000 users). It must also have its redundancy increased to provide no outages.

- Alternatively, when there is a mismatch between the capacity and reliability of the providing system and the requesting system, the data may be effectively de-normalized by building a copy or synchronization mechanism between the systems – the providing system sending a copy of the needed data set into the requesting system for local use. This may also be appropriate if the systems operate in different security domains, in different networks or network segments where bridging is a problem, or where the systems are separated by geographic areas where network performance [speed or capacity] is an issue. It's also frequently required where a packaged application is the requesting system, as most packaged applications will only query data in their expected local format and cannot be redirected to a web service or other remote query.

- One other concern in such integrations is the lifecycle of the information. Is it static, reference information, infrequently updated, frequently updated or transactional or computed? Frequently updated information has dangers of synchronization problems or being out of date, and transactional or computed items can only be queried from their source system.

We generally hear that de-normalizing, whether in the database itself or across systems, is a crime (across systems would make it an integration crime?).  Yet the circumstances above can make it preferred or even necessary.  This is not a bad thing when done for the right reasons and in a reasonably reliable way.  (Noting that every synchronization mechanism should have a full re-sync option that is run periodically [monthly or yearly or as appropriate].)
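The copy / synchronization pattern above, including the periodic full re-sync safety net, can be sketched in a few lines of Python (the record keys and values are hypothetical):

```python
source = {}   # providing system (e.g., the HR master data)
replica = {}  # requesting system's local, de-normalized copy
pending = []  # change feed: keys updated since the last incremental sync

def update_source(key, value):
    source[key] = value
    pending.append(key)          # record the delta for incremental sync

def incremental_sync():
    # Push only changed records -- cheap, can run continuously.
    while pending:
        key = pending.pop(0)
        replica[key] = source[key]

def full_resync():
    # Periodic safety net: rebuild the replica from scratch so that any
    # missed or mis-ordered deltas cannot drift forever.
    replica.clear()
    replica.update(source)

update_source("emp-1", {"name": "Dana", "dept": "HR"})
incremental_sync()
replica["emp-ghost"] = {"name": "stale row"}   # simulate drift
full_resync()
print(replica == source)
```

The incremental path carries the day-to-day load; the full re-sync is what makes the de-normalization “reasonably reliable” rather than a slow divergence.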

Sep 20, 2012

Categorize or Search?

SOA design time governance products come with a variety of methods to categorize services or assets.  Interestingly, I’m currently working on a project involving a document management system, and the selected document management tool comes with an almost identical selection of categorization methods.  These include trees, taxonomies, and domains, among others.

In some of the earlier SOA design time governance implementations I performed, we spent significant time working on the categorizations – trying to make the catalog and information easy to traverse for the various user categories that would encounter it.  (In the case of design time governance, this might be analysts, architects, developers, QA, and IT management.)

We invariably found designing the categories and approaches took a tremendous amount of time, with every constituency having different ideas and requesting various adjustments to the approach.  To some extent it became the never ending quest for the perfect structure, never to be found.

We saw this same pattern emerge and fail on the World Wide Web.  Those with a little Internet history behind them may remember the early Internet indexing and search sites, such as Lycos, AltaVista, Netscape and Yahoo.  Each presented various approaches to indexing, categorizing and presenting the Internet in various taxonomies.

One day a newcomer named Google arrived, presenting one simple function… search.  Within a short time all the indexing and taxonomy sites were dead.

The way we do things is limited by our technology.  When we’re doing things manually, our technology may be bookshelves or filing cabinets, paper index cards and human retrieval methods – or even human memory capacity.  As we automate processes with newer technology, it’s perfectly normal to take the previous process and “enhance” it with the new technology – but the previous process still exists.

Once we truly understand the possibilities and capabilities of the new technology, we often supersede the previous process.  In the case of cataloging our information, our models came from books, libraries, index cards, filing cabinets, etc.  Even if we were storing our information in high speed relational databases, our access models were based on our previous process – catalogs, sorted indexes, grouped information.

Yet Google has shown us that that model is significantly less efficient, and of less value, than a capable search.

When we’re modeling today’s software abilities, user interfaces and data organization approaches, search should be the first and primary approach.  Categorization is just pushing the old much less efficient approach forward.
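The search-first point is easy to demonstrate: a minimal inverted index makes any asset findable by keyword, with no tree, taxonomy or domain to maintain.  A toy Python sketch (the asset names and descriptions are invented):

```python
from collections import defaultdict

assets = {
    "svc-001": "customer address lookup service",
    "svc-002": "payment processing batch component",
    "svc-003": "customer payment history query service",
}

# Build the inverted index once: word -> set of asset ids.
index = defaultdict(set)
for asset_id, description in assets.items():
    for word in description.split():
        index[word].add(asset_id)

def search(*words):
    """Return ids matching ALL words -- no categorization scheme required."""
    results = [index.get(w, set()) for w in words]
    return sorted(set.intersection(*results)) if results else []

print(search("customer", "payment"))   # -> ['svc-003']
```

Every new asset is findable the moment it is described – no committee meeting about which branch of the taxonomy it belongs in.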

Mainframe Integration–NATURAL Web Services

Software AG’s Natural language and development environment for the mainframe (IBM z/OS and CICS) offered many nice improvements over COBOL for mainframe software development.  Software AG developed a nice niche in the mainframe development tools market, and millions of lines of code were developed and continue to run to this day.

Like IBM with its version of COBOL, the vendor has struggled to extend Natural’s lifespan and the life of the code written in it.  Initially, Software AG created a communication bridging tool (called EntireX), allowing bi-directional communication between the Natural environment and Web Services, MQ, Java or .Net.

But just as IBM came out with “native” web services as part of CICS 3 and Enterprise COBOL, Software AG has web service enabled Natural.  And web service enablement includes the handling of (simplified) XML.

From the code sample I was able to find, it appears that Natural does NOT handle the building of the HTTP headers, requiring them to be manually added at the top of the XML document – a rather odd lack, but the use of direct web services still significantly simplifies integration into the environment.

Here’s the code sample I found, which was used as a base to build some internal test services by a Natural programmer I know…

Description :
This example demonstrates how to call a SOAP service from Natural using the REQUEST DOCUMENT statement; it then parses the output with the PARSE XML statement and presents a formatted view of the response.
The code is cross-platform: it works on mainframe Natural and OpenSystems / LUW Natural alike.
Service used:
Input asked for by the Natural program: City, Country
Output: weather details for the selected City + Country, or an error response


* more info about the services to call:
* better cross platform solution
DEFINE DATA LOCAL
1 #REQUEST              (A) DYNAMIC
1 #RESPONSE             (B) DYNAMIC
1 #RC                   (I4)
1 #PATH                 (A) DYNAMIC
1 #NAME                 (A) DYNAMIC
1 #VALUE                (A) DYNAMIC
1 #ACTIVE_NAME          (A16)
1 #REPLY
  2 CITYNAME            (A50)
  2 COUNTRYNAME         (A50)
  2 LOCATION            (A50)
  2 TIME                (A50)
  2 WIND                (A50)
  2 VISIBILITY          (A50)
  2 SKYCONDITIONS       (A50)
  2 TEMPERATURE         (A50)
  2 DEWPOINT            (A50)
  2 RELATIVEHUMIDITY    (A50)
  2 PRESSURE            (A50)
  2 STATUS              (A50)
* ---------------------------------------------------------- XML-COPY-OF
1 #XML-COPY-OF          (A) DYNAMIC
1 #FIND-TAG             (A) DYNAMIC
1 #TAG-FROM             (A) DYNAMIC
1 #TAG-TO               (A) DYNAMIC
1 #TAG-END              (A) DYNAMIC
1 #AT_SIGN              (A1)
1 #APOSTROPHE           (A1)
1 #QUOTATION_MARK       (A1)
* ---------------------------------------------------------- XML-COPY-OF
END-DEFINE
INPUT // 'CityName...:' CITYNAME
       / 'CountryName:' COUNTRYNAME
      // 'Example: Tokyo, Japan; Heidelberg, Germany'
  '<?xml version="1.0" encoding="UTF-8" ?>' -
  '<SOAP-ENV:Envelope ' -
  'xmlns:SOAP-ENV="" ' -
  'xmlns:SOAP-ENC="" ' -
  'xmlns:xsi="" ' -
  'xmlns:xsd="">' -
  '<SOAP-ENV:Body>' -
  '<m:GetWeather xmlns:m="http://www.webserviceX.NET">' -
  '<m:CityName>' CITYNAME '</m:CityName>' -
  '<m:CountryName>' COUNTRYNAME '</m:CountryName>' -
  '</m:GetWeather>' -
  '</SOAP-ENV:Body>' -
  '</SOAP-ENV:Envelope>' INTO
* For more information about the service have a look at:
      NAME 'Request-Method' VALUE 'POST'
      NAME 'Content-Type' VALUE 'text/xml; encoding=utf-8'
      NAME 'SOAPAction' VALUE 'http://www.webserviceX.NET/GetWeather'
IF #RC = 200
* cut out the response
  #FIND-TAG := 'soap:Envelope/soap:Body/GetWeatherResponse/GetWeatherResult'
* parse the response
      VALUE 'CurrentWeather/Location/$'
      VALUE 'CurrentWeather/Time/$'
        #REPLY.TIME := #VALUE
      VALUE 'CurrentWeather/Wind/$'
        #REPLY.WIND := #VALUE
      VALUE 'CurrentWeather/Visibility/$'
      VALUE 'CurrentWeather/SkyConditions/$'
      VALUE 'CurrentWeather/Temperature/$'
      VALUE 'CurrentWeather/DewPointh/$'
      VALUE 'CurrentWeather/RelativeHumidity/$'
      VALUE 'CurrentWeather/Pressure/$'
      VALUE 'CurrentWeather/Status/$'

    'Location ............' #REPLY.LOCATION /
    'Time ................' #REPLY.TIME /
    'Wind ................' #REPLY.WIND /
    'Visibility ..........' #REPLY.VISIBILITY /
    'Sky Conditions ......' #REPLY.SKYCONDITIONS /
    'Temperature .........' #REPLY.TEMPERATURE /
    'Dew Point ...........' #REPLY.DEWPOINT /
    'Relative Humidity ...' #REPLY.RELATIVEHUMIDITY /
    'Pressure ............' #REPLY.PRESSURE /
    'Status ..............' #REPLY.STATUS /
* copy the content of a xml element
* works equal to a XSL copy-of
* for the mainframe
IF H'41' EQ "A" THEN
  #AT_SIGN := '@'
  #APOSTROPHE := H'27'
  #AT_SIGN := '?‚?§'
/*( end tag
*     remove false xml header
      COMPRESS '<?xml version="1.0" ?>'
/*( start tag
/*( content
        VALUE "?"
         COMPRESS FULL #XML-COPY-OF '<?' #NAME ' ' #VALUE '?>'
        VALUE #EXCLAMATION_MARK /* Comment.
        VALUE "C" /* CDATA section.
        VALUE "T" /* Starting tag.
        VALUE #AT_SIGN /* Attribute (or ?‚?§ on mainframes).
            COMPRESS FULL #XML-COPY-OF  ' ' #NAME '='
            COMPRESS FULL #XML-COPY-OF  ' ' #NAME '='
        VALUE "/" /* Closing tag.
          COMPRESS FULL #XML-COPY-OF '</' #NAME '>'
        NONE VALUE  /* $ Parsed data.
/*( none
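For comparison, here’s how the same GetWeather call might look outside Natural – a minimal Python sketch of my own, not from the sample; the endpoint URL and the response layout are assumptions based on the webserviceX.NET service the sample targets.  Note that the HTTP headers ride on the request itself, not inside the XML document:

```python
import urllib.request
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

def build_request(city, country):
    # the same envelope the Natural program assembles with COMPRESS
    return (
        '<?xml version="1.0" encoding="UTF-8"?>'
        f'<SOAP-ENV:Envelope xmlns:SOAP-ENV="{SOAP_ENV}">'
        '<SOAP-ENV:Body>'
        '<m:GetWeather xmlns:m="http://www.webserviceX.NET">'
        f'<m:CityName>{city}</m:CityName>'
        f'<m:CountryName>{country}</m:CountryName>'
        '</m:GetWeather>'
        '</SOAP-ENV:Body>'
        '</SOAP-ENV:Envelope>'
    )

def call_get_weather(city, country,
                     url="http://www.webservicex.net/globalweather.asmx"):
    # headers belong to the HTTP request, not the XML payload
    req = urllib.request.Request(
        url,
        data=build_request(city, country).encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "http://www.webserviceX.NET/GetWeather"},
        method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")

def parse_reply(current_weather_xml):
    # the equivalent of the PARSE XML loop: flatten the CurrentWeather
    # element into a dictionary of field name -> text
    root = ET.fromstring(current_weather_xml)
    return {child.tag: (child.text or "") for child in root}
```

A modern environment gives you the header handling and the XML parsing for free, which is exactly the gap the Natural sample has to fill with REQUEST DOCUMENT clauses and the XML-COPY-OF helper.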

Jun 27, 2012

Hints from the Grey Haired Programmers

I attended a major vendor conference this week.  This particular vendor has a line of modern current products, and also a set of 2nd generation mainframe products.  They’ve wisely bought a series of smaller product companies over the past decade to ensure their future.

But the old product set continues to live on as well.  Yes, they’re offering modern interfaces, web service enablement and so forth.  And naturally they’re still investing in the original product set, as a cash cow should be milked as long as possible.

One of the speakers I wanted to hear was speaking in the older product set track, so I made my way to that conference area.  As I entered it, a generational change was obvious.  EVERYONE, and I mean everyone, was 55 years old or older – with the majority seemingly very close to retirement.

There’s a clear hint from this sight for those using this product set.  We normally think of end of life for a technology as when either support ends or the supporting platform is discontinued.  But clearly there are a variety of applications and tools that are tremendously outdated but still in operation.

These products continue as companies have millions invested in their use, customizations, development of code, and all the factors that go into making a system of value to a business.

But while these products may continue to live, there does come a true end of life stage.  In the case I’m describing, it clearly seems the knowledge set of this product is literally leaving the industry as the people retire.

While many vendors are quite happy to extend the life of their older and even oldest products forever (for a high support fee), customers would be wise to look at the availability of skilled, knowledgeable people to work with them.  When the skilled people are reaching the end of their working lives, clearly the products have to be replaced.  Hopefully before skills availability diminishes to a severe level.

Jun 21, 2012

Are Relational Databases Dead?

I met with a major vendor technology evangelist recently (actually he called himself a technology space regional CTO) on the topic of noSQL and big data. 

Now I’ll admit, like any technologist and architect, I have my areas of specialty and areas of less specific knowledge.  And as the IT industry continues to develop new technologies and approaches yearly, it’s challenging to stay up to date.

So while I know relational databases well, can write good SQL, and understand architecturally when to use database power functions like triggers, stored procedures and the like, as well as database implementation level issues such as clustering, failover, performance, etc., I had not been paying attention to recent industry changes such as noSQL, graph databases and big data.

This tech evangelist presented the concept of a particular noSQL tool, emphatically stating that the day of the relational database was over.  Relational databases are dead as a future technology.  He even went so far as to predict a major decline in RDBMS use (in new projects) over the next 2 years.

(That’s how you know he’s an evangelist.  Even when a wonderful new tool arrives that makes an older technology completely replaceable with a much better approach, it still takes time to propagate across the industry.)

A recent project brought me face to face with noSQL type challenges.  Businesses are demanding more and more data relationships, more interconnectivity between objects, entities, and data elements.  Suddenly businesses expect to build Facebook-like relationships into various parts of their business data, and expect to be able to find data, and present it to users, by walking the relationship chain.

Lets take an example:

A customer sends an email about a problem with his account.  The business wants to manage the customer relationship, so a link to the email object needs to be placed in the CRM system, as does an entry recording what his requests were.  His requests set off a series of business processes to handle them, and each process wants to link back to the entity that triggered it (the original request in the CRM system).  This allows the process to report “I was started due to…”.  But the inverse is also desired: the ability for the CRM system to look at the request and see the processes started due to it (possibly in other systems), so that if the customer calls, or looks up the status of his request on a customer web portal, the business can report back “we did or are doing … because of your request”.

This bi-directional many-to-many relationship structure would be complicated and tricky to implement in a traditional relational database structure.  (It could be done, but would not maintain any referential integrity.)  But that assumes operating in 1 single database and one single database schema.
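To make that concrete, here’s a minimal sketch (table and column names are mine, purely illustrative) of the generic link table such a design usually ends up with in a single relational database.  Notice the database cannot enforce a foreign key on id columns that point at different tables in different systems – exactly the referential integrity gap described above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# one generic link table; no FOREIGN KEY is possible because from_id /
# to_id refer to different tables depending on from_type / to_type
con.executescript("""
CREATE TABLE link (
  from_system TEXT, from_type TEXT, from_id INTEGER,
  to_system   TEXT, to_type   TEXT, to_id   INTEGER
);
""")

# CRM request 42 triggered billing process 7
con.execute("INSERT INTO link VALUES ('CRM','request',42,'Billing','process',7)")

# walk forward: what did request 42 start?
fwd = con.execute(
    "SELECT to_system, to_type, to_id FROM link "
    "WHERE from_type='request' AND from_id=42").fetchall()

# walk backward: why does billing process 7 exist?
back = con.execute(
    "SELECT from_system, from_type, from_id FROM link "
    "WHERE to_type='process' AND to_id=7").fetchall()
```

It works, but every new relationship type means more rows in an ever-hotter table, and nothing stops a link from pointing at a row that no longer exists.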

What happens when these relationships need to be maintained across modules, across applications and across systems?  When it’s the CRM system and a customer request that’s invoking actions in the Sales system and Billing system?

In other words, what happens when our data relationships are cross platform?

In the past we just threw customer keys at each other.  But when the points of the data relationships move to the tens or more, throwing around keys and indirect relationships is no longer viable.

This is where the noSQL (not only SQL) tools step in.  They are all about building and walking data relationships, and dynamically building data content and data access paths.
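As a toy illustration of what those tools do (the API below is invented for the example, not any particular product’s), a graph store keeps both directions of every relationship and walks them directly rather than joining over keys:

```python
from collections import defaultdict

class Graph:
    """Tiny in-memory graph: every edge is indexed in both directions."""
    def __init__(self):
        self.out = defaultdict(set)   # node -> {(relation, target)}
        self.inc = defaultdict(set)   # node -> {(relation, source)}

    def link(self, src, rel, dst):
        self.out[src].add((rel, dst))
        self.inc[dst].add((rel, src))

g = Graph()
g.link("email:1001", "logged-as", "crm:req42")
g.link("crm:req42", "triggered", "billing:proc7")
g.link("crm:req42", "triggered", "sales:proc9")

# forward walk: everything started because of the request
started = {d for rel, d in g.out["crm:req42"] if rel == "triggered"}

# reverse walk: the process reporting "I was started due to..."
origin = {s for rel, s in g.inc["billing:proc7"] if rel == "triggered"}
```

The bi-directional many-to-many case from the CRM example above becomes two dictionary lookups instead of a join over a generic link table.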

So much of our time with relational databases is spent on keys and indices (access paths) and relationships, especially when those relationships or access paths grow beyond 2 or 3.  The recent growth of the JPA standard in Java is an attempt to partially resolve this problem (let Java generate them for you).

noSQL graph databases are a full resolution of this problem.

If you haven’t looked into them and you’re into architecture, system integration and/or database design, I suggest it’s time to do so. 

There is one failing though.  Relational databases are very good at what they do, and very well understood; it would seem overkill to try to supplant them with noSQL tools in their primary areas of strength.  Yet at the moment there are few access tools that offer a combined RDBMS and graph-DB approach, where I could build a single query that traverses a graph node path to a relational table row.

Whether RDBMS has a future or not, I don’t know.  But the noSQL and graph database approach is clearly worth looking into for almost any new project.

Jan 15, 2012

Architecting the Software Development Team

I was recently deep in an architecture process when I was asked by a team member for some help in understanding the team operating model.  Or more specifically, a team leader and architect were asking me whether they should be coding a particularly difficult area of the system.

Being in architect mode, I immediately ran to the white board and architected the optimum software development team process.  Here it is…



What I was trying to describe was the role at each level, which is achieved through experience, and the flow between them.  So while an architect or team leader may be able to develop code at 3-5 times the rate of a Junior Programmer, everyone above can fill the roles below – but the reverse is not true.  So if that architect is programming, no one is architecting!

And while the architect might be able to code without an architecture or design (because he or she has an image of it in his or her mind), if he (or she) does so there is no architecture or design for other programmers that come along in the future and have to deal with that code (maintain it, extend it, interface with it, etc). 

When the senior people do this for base system components, they make a major team mistake that leaves everyone else struggling with those undiagrammed, undocumented features in the future.

It may get it done faster now, but it’s usually a mistake in the long run.

Nov 17, 2011

CloudCon – Funny Vendor Quotes

I’m sitting at CloudCon III – SaaSCon 2011.  It’s “a Cloud Conference with a focus on Software as a Service.”  Sadly the presentations are of limited value, with the same confusion I noted in my “Impressions” article (i.e. every IT business marketing department is trying to take advantage of it and rebrand their abilities “Cloud”).

While not of particular technical knowledge value, they do tend to produce humorous statements from the presenters…


“80% of Fortune 100 companies are using IBM cloud capabilities.”  Wow, you’ve got 80 customers?  Really?  (Those Fortune 100 companies, that spent from $200 million to $1 billion per year on IT costs, are generally using some of every capability of every major IT vendor.)


“4 million businesses have gone Google.”  As of 2007, the US Census Bureau reported there are 29,413,039 businesses in the U.S.  Assuming Google’s talking just about the U.S. (and I don’t think they were), that’s a 13% market penetration!  Wow!  (Not!)  If we were to take 2011 numbers and go worldwide, it might be 3% penetration.  Double wow!  (Double not!)

“We expected cloud email to be a growth industry and a challenge to Microsoft in the Enterprise.”  Chuckle.  This is your Cloud goal?  Email?

"Chromebooks – nothing but a browser, configured via the cloud, automatic upgrades, subscription model. Strong processor, wifi, 3G, battery lasts a full day. No hard drive.  Easy to replace a traditional laptop.  Happy IT managers and end users.”  Oh, and costs $499 in the US for a 12.1 inch netbook, $200 more than a Windows netbook with 1/2 the ability.  #Fail

“What the cloud offers: Enhanced Security”.  You didn’t actually say this?  You couldn’t have actually said this!  Savings, definitely.  Ease of access to abilities, yes.  Flexibility, definitely.  Enhanced security, no way in h#ll.  If you’re going cloud you had BETTER be spending A LOT more time layering on the security!


“Software as a Service, Cloud, Managed Services, Hosted Services, Outsourcing – we just change the name now and then to keep it fresh.”  Well that was refreshingly honest.

“Strength in depth.  A cloud based solution, a gateway based solution, a desktop based solution, all from different vendors.  It’s expensive, but when places are serious about security this is what they do.”  Is someone really talking straight?  So unusual I’m not sure I can handle it.

Impressions from a SaasCon – CloudCon

I’m sitting at CloudCon III – SaaSCon 2011.  It’s marketed as a Cloud Conference with a focus on Software as a Service.  Here’s what I’m seeing…

a.  Computer hardware vendors selling small footprint office workstations.  It’s not a surprise that computer vendors for the office have finally decided to abandon the standard desk-drawer PC box for a cigar sized box.  (Anyone who opens a standard PC box will find the components would fit in a cigar box anyway, the rest is open space or fans.)  The surprise is it took so long and that they’re selling them as “cloud workstations” and spending their money trying to market them at a Cloud Conference.

b.  Network hardware vendors selling the next speed in network infrastructure, 10 gigabit.  Apparently since everything’s “in the cloud” you need yet more network bandwidth to get to it.  This is some nice marketing fluff, since Cloud doesn’t increase your bandwidth needs; it just shifts part of them from your internal network to external vendors – to whom your network connections are inevitably slower, simply due to WAN costs.  As far as the internal network goes, network storage devices, SOA and web services, and heavy application infrastructure have already driven massive network speed and capacity increases.  Not that I’d complain about deploying apps or integrations on a 10gbit network; it certainly makes non-local devices respond even more as if local.  But again, not new, not “cloudy”, just another marketing ploy.

Anyone notice I haven’t mentioned anything Software as a Service oriented yet?

c.  Consulting vendors.  “We’ve got Cloud experience and expertise.”  Sure you do.  Reminds me of when, early in my career, I saw advertisements for “5 years of Visual C++ programming experience” when Visual C++ had only been generally released for 1 year.  On the serious side, I did have a conversation with a major consulting vendor division VP who told me they have started recommending the use of some “private cloud” resources – which they translate to mean some storage or computing resources hosted in the consulting vendor’s data center.  So for some consulting vendors, Cloud means outsourcing a customer’s storage or computing requirements to the vendor data center and/or hosting the customer’s applications for them in the consulting vendor data center.

d.  Utility service vendors.  Symantec “virus protection as a cloud service”, somebody offering Fax as a Cloud service (there’s something incredibly weird about offering a 1980’s technology as a 2010’s cloud ability), central Email management as a Cloud service, and Telephony as a Cloud service.  The last one is kind of interesting though again not what I think of when I think Software as a Service.  Since the PBX moved to VOIP (voice over IP) and the office phone handsets moved to TCP/IP network connected digital devices (the less technical may not have noticed over the last 5 years their office phone moved from being connected to a phone wire to being connected to the office network), it makes sense you could move the PBX to a Cloud service.

e. The Big Vendors.  Did you know IBM offers cloud services?  IBM Smart Cloud!  It’s IBM, it’s Cloud, it’s Smart.  Marketing at its best.  Not much to actually say or show beyond “we’ve got lots of data centers and cloud offerings around the world”.  Ok, we know it’s IBM and they’ll (probably) make just about anything you want work…if you’ve got money and time.

f.  Data center hosting vendors.  They can host your servers, they can virtualize for you, they can host your storage, your backups, your network services…oh and by the way they’ll host your Private Cloud (which for them is just your collection of servers in their data center).  A minor twist on what they already do.

g.  Far off in a little corner by themselves were a few real Software as a Service vendors.  A CRM vendor, a Project Management vendor: real-life Software as a Service vendors offering their applications, their abilities, and various pricing models.

Net net, it tells me that Cloud and Software as a Service remain a confusing, poorly defined, poorly understood tech space.  Every IT business marketing department is trying to take advantage of it and rebrand its abilities “Cloud”. 

But like the hype cycle for SOA, many try but not that many are offering actual value in the space.  The Cloud and Software as a Service market has a lot of growing to do and maturity to gain before it stabilizes.  There’s definitely value to be gained right now, but also the possibility to be taken by ridiculous claims and expensive products and services offering marginal value.

Nov 6, 2011

Batch Out to Web Services?

Calling web services from the mainframe has become a frequent question.  But as applications (and data) may be migrated off the mainframe to apps now hosted on Linux or Windows servers, the old trustworthy batch jobs may suddenly need to access remote systems and web services to do their job.

Here’s how one person phrased their problem…

We are currently looking at doing a partial migration away from a mainframe.  Some of the functionality is written in mainframe COBOL and is called from mainframe batch programs.  We would like to move these COBOL programs off the mainframe.  Question – if we moved the functionality in the COBOL programs to a Java or .Net web service, is there a way to call this web service from a mainframe batch program?

Technically this is an easy answer.  Yes, web services can be invoked from the mainframe.  They can even be directly invoked from CICS and from IBM Enterprise COBOL (as of CICS TS 3.1).  There are some technical limitations: Enterprise COBOL web services cannot deal with complicated XML structures or all XML data types, which can make it a challenge to call pre-defined web services built to modern standards.  But if the web services are being created directly to service the Enterprise COBOL call, no problem (technically speaking).

Architecturally, this type of batch web service invocation does have a major flaw.  Anyone doing batch programming knows that database commits can cause significant performance problems for batch, and therefore careful management of database commits (and other database activity) is part of every batch implementation (commonly, commits are done only every 100 transactions or more).

Similarly, every web service invocation has an overhead cost.  Multiply this by tens of thousands or hundreds of thousands of transactions and your batch process will spend most of its time waiting to make web service connections.  And that time may run to hours or more.

The solution is similar to the database commit approach.  The web service must be designed to pass multiple transaction requests through a single invocation.  The communication connection is made and an array of transaction requests (in mainframe COBOL speak) or a list of SOAP documents (in web service speak) are transmitted during the connection. 

Naturally the receiving web service must be designed to handle multiple transaction requests in a single invocation, and practically this is not a problem in any modern environment (such as Java or .Net).  It is an unusual pattern that most don’t consider, but even in most normal circumstances there is no reason that a web service shouldn’t handle multiple transactions included in a single request body or multiple request bodies in a single communication instance.
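The client side of that pattern can be sketched in a few lines of Python (the names are invented for illustration): chunk the transactions and make one invocation per chunk, just as a batch program commits once per hundred database transactions rather than once per transaction:

```python
def batched(transactions, size=100):
    """Yield the transaction list in fixed-size chunks."""
    for i in range(0, len(transactions), size):
        yield transactions[i:i + size]

def send_all(transactions, post_batch, size=100):
    # post_batch stands in for one web service invocation carrying an
    # array of transaction requests (one connection, many transactions)
    calls = 0
    for batch in batched(transactions, size):
        post_batch(batch)
        calls += 1
    return calls
```

With a batch size of 100, a run of 25,000 transactions pays the connection overhead 250 times instead of 25,000 times.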

That is the HTTP/SOAP based approach to this problem, but it is not the only approach.  Alternatives include queuing: loading all the requests into a messaging system (such as IBM WebSphere MQ), with the processing system using a reasonably large thread pool to pull and process the messages as they arrive.
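The queuing alternative can be sketched the same way, with Python’s standard queue.Queue standing in for the real message broker and a small thread pool for the consumers (all names here are illustrative):

```python
import queue
import threading

def run_workers(messages, handle, threads=4):
    """Drain a queue of messages with a pool of consumer threads."""
    q = queue.Queue()
    for m in messages:
        q.put(m)

    done = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                m = q.get_nowait()
            except queue.Empty:
                return          # queue drained, consumer exits
            handle(m)           # stands in for the per-message processing
            with lock:
                done.append(m)

    pool = [threading.Thread(target=worker) for _ in range(threads)]
    for t in pool:
        t.start()
    for t in pool:
        t.join()
    return len(done)
```

The batch job’s only responsibility becomes loading the queue; the consumers absorb the per-message overhead in parallel.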

These ‘mixed environment’ batches are already very common, and many organizations have no intention to move away from the ‘large processing job’ approach.  As resources spread even farther and into the cloud, this problem will grow ever more ‘interesting’.

Oct 24, 2011

What Cloud, Which Cloud, Where Cloud?

Cloud Computing has strongly entered its hype cycle.  Just as everyone ran to relabel everything Service Oriented and ESB-this or that, now everything is being relabeled Cloud-this or that. 

As soon as that happens we enter the technology confusion cycle.  (Sometimes one thinks this is intentional on the part of vendors, so you can’t tell exactly where their product fits or where it lacks.)

Let’s see if we can do a little bit of Cloud Clarification™…

-> Cloud Computing is about pushing applications, components, modules, abilities to an on-demand model with remote capacity.

-> Software as a Service is about renting software abilities via a vendor exposing abilities, modules, business processes for remote use.

-> Cloud Infrastructure, which is also being called Cloud Computing, is about renting remote computing/hardware capacity on demand and in fractional increments.

About the best picture I’ve seen describing this is here… (though there are some details in it I’d quibble about)



Today most Cloud ‘things’ being sold or used are Cloud Infrastructure, meaning remote storage or remote computing capacity.  What makes them different from just renting a server from a hosting vendor somewhere is that they’re usually available on demand (renting a server, whether physical or virtual, usually involves some wait and setup time as the server is prepared for you, including installing or allocating the right amount of memory and disk) and are charged in fractional increments.  For example, Amazon’s S3 storage service charges per gigabyte stored per month, and its EC2 computing service per instance-hour. 
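The fractional billing is the real novelty.  As a back-of-the-envelope sketch (the rate below is invented for illustration, not any vendor’s actual price list):

```python
def usage_cost(gb_stored, hours_used, rate_per_gb_month=0.10):
    # bill only the fraction of a ~730-hour month actually consumed,
    # instead of a flat monthly server rental
    return gb_stored * rate_per_gb_month * (hours_used / 730)

# 100 GB kept for 73 hours costs a tenth of what a full month would
cost = usage_cost(100, 73)
```

Compare that with a hosted server, where you pay for the whole box for the whole month whether you used it or not.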

The other big Cloud activity is Software as a Service (SaaS).  SaaS is making serious inroads in major IT shops, as its vendors make an excellent case for simply using their software remotely, at a lower cost than a major CRM, ERP or accounting software purchase plus the associated installation, administration, and server costs required for a normal software install.  And not to forget ongoing maintenance costs, support, and high availability redundancy.  Most major software vendors are preparing Software as a Service editions of their major application products.  CA, SAP, PeopleSoft, BMC… the SaaS model has been proven and the majors are trying to follow (not always an easy task, with many major application products having old software bases).

Real “Cloud Computing”, the ability to dynamically deploy some code or modules or components or services to on-demand container environments has not yet had major penetration.  It’s understandable as the complexity is much higher and has to be coordinated with development environments and tools.  Many start-ups are trying to create the right combination, and I’m sure we’ll see increasing traction here soon.

So the questions remain:

• Will I be able to mix and match business processes and capabilities from multiple vendors’ business process (formerly application) portfolios?

• Can I “deploy” my integrated orchestrated capabilities in an on-demand environment?

• Will it let me gain a strategic business advantage by creating unique processes exactly matching my business goals?

• Will I be using and only paying for the exact capacity I need?

• Will I be able to change my processes quickly and easily as market conditions change?

Not yet, but to some of them there is a partial yes, and all of them are in sight.

Jul 27, 2011

CICS Web Service Compatibility

IBM has done significant work to allow mainframe based applications to expose and consume web services.  They’ve particularly targeted CICS and the COBOL, PL/I and C++ languages. 

While many vendors (including IBM) offer a variety of tools to provide easy web service bridging, IBM’s CICS efforts offer a direct path without loading and managing additional utilities.  There was concern in the past about the CPU load this added to CICS, but IBM has addressed those problems over the years; the current edition shows good performance with reasonable overhead.

While IBM recommends using modern development tools such as their Rational Application Developer for z/OS to automatically generate and build the bindings and WSDLs necessary for a service, their CICS command line utility (DFHLS2WS) is probably what most use.  With a short series of configuration settings (and limited options), it takes program information and data areas and generates the appropriate WSDL and binding files.

Which is where it gets interesting.

IBM’s mainframe folks have worked hard to generate a very exacting and 100% compliant WSDL while dealing with the difficult aspects of fixed-length fields, fixed array quantities, EBCDIC to UTF-8 translations, and unique language storage models (such as COBOL’s COMP-3 packed decimal field).  For the most part they’ve done a good job of mapping 2nd generation languages’ internal storage models to XML and the latest schema/XSD capabilities.  But in a few areas they’ve seriously fallen down.

1. Arrays.  Empty arrays are sent through in XML as repetitive empty copies of the field/tag.  So if in COBOL you have 05 PHONE-NUMBERS PIC X(9) OCCURS 50 (an array called PHONE-NUMBERS of fifty 9-character fields), in XML you get <PHONE-NUMBERS> repeated 50 times, still present even when the array is empty.
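A quick illustration of the payload cost – Python here merely mimics what the generated mapping emits for the COBOL table above; the helper function is mine, not IBM’s:

```python
def cobol_fixed_array_to_xml(values, occurs=50, tag="PHONE-NUMBERS"):
    # a fixed-occurrence COBOL table is emitted in full, used or not:
    # pad the supplied values out to OCCURS entries and emit every one
    padded = (list(values) + [""] * occurs)[:occurs]
    return "".join(f"<{tag}>{v}</{tag}>" for v in padded)

empty = cobol_fixed_array_to_xml([])      # no phone numbers at all
one = cobol_fixed_array_to_xml(["555123456"])
```

Even a completely empty array ships 50 open/close tag pairs over the wire, which adds up fast across large COBOL copybooks.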

2. There’s sophisticated namespace usage by IBM in the resulting WSDL: the WSDL has one namespace, the request tags another, and the response tags a third.  At the start of each section the DFHLS2WS utility names an XSD complex type “ProgramInterface”.  Before doing so it changes the namespace, but does NOT prefix the names in the section with that namespace.  The strict rules of XSDs say this is acceptable.  BUT when .Net programmers import the CICS generated WSDL into Visual Studio, or Java programmers import it into Eclipse (including IBM’s Rational Application Developer for Websphere), these tools reject the WSDL due to a duplicate name (the repetition of ProgramInterface without the namespace pre-pended, but after the namespace change).

Technically, that’s a bug in Visual Studio, Eclipse and Rational Application Developer, as those tools aren’t handling the namespace change and implicitly placing the names in that section in the namespace.

Practically it means IBM’s CICS team outsmarted themselves with the XSD sophistication of the output of DFHLS2WS, using a feature that’s not well supported by the developers of the development tools that will be importing the resulting WSDL.

Interoperability standards are critical.  But vendor interoperability isn’t perfect.  While we shouldn’t have to go to the lowest common denominator, going to the highest isn’t wise.

[ There is no solution for this particular DFHLS2WS problem beyond writing a utility to automatically modify the output WSDL, or manually modifying it.  Maddeningly the IBM documentation even says it is likely to need to be modified rather than offering options or flexibility! ]

Jul 4, 2011

Cloud Computing, As a Service, and Taxes

In the past months a number of articles have been published about Cloud Computing and taxes.  As a techie, choosing a software vendor on the basis of taxes may not be something you’d consider.  But, depending on the jurisdiction, software purchases can be charged sales taxes, service taxes, or value added taxes.  While one might think ‘rented software’ isn’t a purchase and can’t be charged a sales tax, in some places it may be considered a capital acquisition (and taxed as a sale), while in others it could be charged a value added tax (which taxes services as well as sales in European countries and Israel) or might be subject to various business taxes (such as a franchise tax in Texas, where every company’s data on the Cloud is considered a local franchise and taxed as such).

First an important definition of terms, because almost EVERY “Cloud Computing will be Taxed” article I read got it absolutely WRONG…

Cloud Computing – renting computing resources from a remote vendor on the basis of computing units rather than physical equipment.  For example, I need web serving ability that can handle a site taking 100,000 visits an hour and storage for 5,000 high resolution photos.  I do not rent servers, computers, or hard disks, I rent “capacity” and often pay for capacity x price-per-capacity-unit per day of use.

When using Cloud Computing resources, I do not know where the resources I’m using are or what physical equipment is involved.  I just need some “capacity” to “run my stuff”.

Software / Platform / Integration / Infrastructure AS A Service – renting use of particular software capabilities with the software operating at the remote vendor.  For example, I need CRM (Customer Relationship Management) functions for my business, so I decide to use a SaaS vendor which provides me a series of software-package abilities that I access remotely, running somewhere on their servers (or in their Cloud).  I do not rent servers and install their software; I rent “software capacity” measured in software usage units (per day / month / year), such as number of users from my office + number of customer records I store in their software x price-per-software-unit x amount of time used.
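That usage formula is simple arithmetic.  A hedged sketch (the rates and unit choices below are invented purely for illustration):

```python
def saas_monthly_cost(users, records,
                      price_per_user=30.0, price_per_1000_records=2.0):
    # users plus stored records, each at a per-unit rate, per month
    return users * price_per_user + (records / 1000) * price_per_1000_records

# a 25-seat office storing 40,000 customer records
cost = saas_monthly_cost(users=25, records=40_000)
```

The point is that the bill scales with usage units, not with servers, licenses, or installed copies.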

Cloud Computing is something IT people might talk about and use, particularly infrastructure people.  …As a Service is something the business people might talk about, such as “should we buy SAP CRM or use Software as a Service”? 

As states and countries are becoming desperate for taxes, all kinds of weird tax schemes are being extended to cover data centers and software services.

The whole point of Cloud Computing and Software (etc.) as a Service is that the user (the company using the services) does not need to worry about where the supporting equipment is, what it is, or whether it is secure / backed up / managed / etc.  All the physical operational details are managed by the Cloud or As a Service vendor.

Even further, the vendors themselves are expected to manage and balance their physical capacity, so where your service is provided from, what it is running on, and where your data is located may change as the vendor rebalances their customer load.

However, if we factor taxes into the picture it’s possible using software in a location or having your data in a location could subject you to local taxes.  Which means these questions suddenly apply (which are the opposite of the point and goals of Cloud Computing and As a Service)…

  • In what state / country / jurisdiction is the service user (company) located?
  • In what state / country / jurisdiction is the service provider located?

  • When companies are multi-state and multi-national, those questions can be even more complicated.

  • In what state / country / jurisdiction is the computing power located (where’s it running, or where’s the data center it’s running within)?
  • In what state / country / jurisdiction is the data storage located?

  • You can’t just ask “where’s the server”, as the server may be virtualized or relocated depending on capacity requirements at any given time.  With current Cloud Computing technology, though, it’s probably staying within one physical data center.

  • Does the service provider have a physical location in the state / country / jurisdiction where the service user is located?
  • (This one’s extra complicated.)  Is the service user using the provided service for their own business or reselling the provided service in some way?  (As an example, photo hosting site SmugMug uses Amazon S3 cloud storage service to store people’s photos.)
  • So now when considering Cloud or As a Service, one must also consider the tax implications in the ‘buyer’s’ jurisdiction, the ‘seller’s’ jurisdiction, the jurisdiction of the data center providing the computing power and the jurisdiction of the data center providing storage capacity (if different from the computing power).
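The checklist above could be captured as a simple data structure for due diligence. This is only a sketch of the questions, with invented field names; actual tax determinations depend on local law and belong with lawyers and accountants:

```python
# Hypothetical sketch: gather the distinct jurisdictions a Cloud /
# As a Service deal may touch. Field names are invented for illustration;
# this is a checklist aid, not a tax determination.

def jurisdictions_to_review(deal):
    """Collect every distinct jurisdiction named in the questions above."""
    candidates = [
        deal.get("service_user"),       # where the buying company sits
        deal.get("service_provider"),   # where the vendor is located
        deal.get("compute_location"),   # data center running the workload
        deal.get("storage_location"),   # data center holding the data
    ]
    # De-duplicate while preserving order; drop unknowns.
    seen, result = set(), []
    for j in candidates:
        if j and j not in seen:
            seen.add(j)
            result.append(j)
    return result

deal = {"service_user": "New York", "service_provider": "Washington",
        "compute_location": "Virginia", "storage_location": "Virginia"}
print(jurisdictions_to_review(deal))  # -> ['New York', 'Washington', 'Virginia']
```

Multi-state and multi-national companies would simply have more entries per field, making the review list longer.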

    As noted above, Texas (somewhat accidentally) tries to charge a business franchise tax if Texas-based data centers are providing your Cloud or As a Service services (the assumption being that your computing or data in their state is a physical presence in their state – which is why Cloud vendors all left Texas).

    Similarly, California recently tried to tax by stating that local people in your affiliate program qualify as a local office (to which Amazon responded by cancelling its affiliate program for anyone with a California address).

    To date the Cloud vendors are solving the problem by avoiding jurisdictions where such taxes might have impact.  But in major deals it’s appropriate to involve your lawyers and accountants to verify contract and local law details that may apply.

    Much Internet growth was fueled by the opportunity to avoid brick-and-mortar taxes.  Let’s hope politicians don’t overly tax the Cloud market before it has a chance to develop its advantages and provide a business base worth taxing.

    Jun 27, 2011

    Basic Enterprise Web Service Security Concepts

    In the (near) past, security was handled by the user interface.  The user interface acted as the sole entry point to the application, and therefore all application security was oriented around user permissions.

    Adding web services is like having great locks on your front door but opening all the windows in your house.  Lots of entry points, each of which needs security.

    There are a few basic concepts that need to be understood to make sense of enterprise web service security.

    Web service security may operate from a user context, an application context, or both.

    User Context: Application 1 includes, in the (web) service request to application 2, information about the user whose action caused the request. Application 2 then decides whether the service is permitted based on the user requesting it in application 1.

    This requires applications 1 and 2 to have a common user security framework (application 2 has to recognize application 1’s user and be able to check if that user is authorized to request the service operation being requested.)

    User Validation – How can application 2 know that the user sent by application 1 has been validated by application 1? One answer would be to send through the user’s password, but application 1 rarely has access to the password (as it may be under the control of an external security system such as Microsoft Active Directory), and sending a password in a message has its own security risks.

    A frequently selected solution is Single Sign On software. This is integrated into both applications 1 and 2; when the user logs in, it gives the application a “user token” instead of user information. This user token can then be passed in the message, and application 2 can simply ask the Single Sign On utility whether the user token is valid and active (is the user still logged in).
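A minimal sketch of that token flow, where the SSO class below is a hypothetical stand-in for whatever Single Sign On product is actually in place (a real product issues opaque tokens and handles expiry, which this toy does not):

```python
# Hypothetical user-context flow: application 1 sends a token, not a password;
# application 2 asks the SSO service whether the token is still valid.

class FakeSSOService:
    """Toy stand-in for a real Single Sign On product."""
    def __init__(self):
        self._active_tokens = {}   # token -> user name

    def login(self, user):
        token = f"token-for-{user}"   # a real SSO issues an opaque token
        self._active_tokens[token] = user
        return token

    def validate(self, token):
        """Return the user if the token is valid and still active, else None."""
        return self._active_tokens.get(token)

sso = FakeSSOService()

# Application 1: the user logs in; the app receives a token, never a password.
token = sso.login("alice")

# Application 1 puts the token in the service request message.
request = {"operation": "update_customer", "user_token": token}

# Application 2: validate the token before honoring the request.
user = sso.validate(request["user_token"])
if user:
    print(f"request authorized for {user}")
else:
    print("request rejected: unknown or expired token")
```

The key property is that the password never travels in the message; only a revocable token does.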

    If applications 1 and 2 have no common user context, no shared user base or shared security source, then user context security can’t be used.  Rather, the best that can be done is application 1 can pass along the name or ID of the user who performed a function resulting in the web service request, and application 2 can store it (for logging or auditing purposes), but can’t check any sort of permissions (as the user is unknown to application 2).

    Application Context: Is application 1 allowed to activate a particular service in application 2? Is application 1’s test environment allowed to activate that service in application 2’s production environment? (probably not.)

    Application context is about whether the source of the request (the source from a program / code / environment perspective) is allowed to request the action being asked from the destination (program and environment).
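Application-context checks reduce to a deny-by-default allow-list keyed on the calling application and its environment. A sketch, assuming a simple in-memory policy table with invented application and service names:

```python
# Hypothetical application-context policy: which (application, environment)
# pairs may call which services in which target environment.
# Application and service names are invented for illustration.

ALLOWED = {
    # (caller app, caller env, service, target env)
    ("app1", "production", "create_order", "production"),
    ("app1", "test",       "create_order", "test"),
}

def is_request_allowed(caller_app, caller_env, service, target_env):
    """Deny by default; note test callers never reach production services."""
    return (caller_app, caller_env, service, target_env) in ALLOWED

print(is_request_allowed("app1", "production", "create_order", "production"))  # True
print(is_request_allowed("app1", "test", "create_order", "production"))        # False
```

Because the test-to-production pair is simply absent from the table, the "probably not" case above falls out automatically.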

    Enforcement: Some ESB’s (Enterprise Service Buses) have internal features to enforce some of this type of security. (Some require add-on modules.) However, even if the ESB is enforcing this type of security, the end points of the requests (the service-providing systems) must also be protected and have service security enforcement. Otherwise, what is to stop a developer (or a hacker who gets into the internal network) from directly accessing a production business web service from a workstation or laptop? (Nothing.) Further, services are intentionally designed to be easy to use and understand (so security through obscurity no longer helps.)

    Complete enforcement is best done using SOA security tools. These will either include an agent on each end point or route all services through security enforcement gateways (with the end points only accepting requests via the gateways).  It is possible to create your own security enforcement function in front of services (such as with IBM Websphere, where a “handler” can be inserted into the Web Service engine), but this is generally not recommended (as you would have to recreate it for each technology exposing services – which the vendors already provide.)
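The gateway model can be sketched as a thin check in front of every endpoint: the endpoint only honors requests carrying proof they passed through the gateway. The HMAC signing scheme below is a simplified illustration of that idea, not any SOA vendor's actual mechanism:

```python
# Simplified gateway-based enforcement: the gateway signs approved requests,
# and endpoints reject anything without a valid gateway signature.
# The shared-secret scheme here is illustrative, not a vendor's design.
import hmac
import hashlib

GATEWAY_SECRET = b"shared-between-gateway-and-endpoints"  # illustrative only

def gateway_forward(request_body: bytes) -> dict:
    """Gateway: apply policy checks, then sign the request it forwards."""
    # ... user-context and application-context checks would go here ...
    sig = hmac.new(GATEWAY_SECRET, request_body, hashlib.sha256).hexdigest()
    return {"body": request_body, "gateway_signature": sig}

def endpoint_accepts(message: dict) -> bool:
    """Endpoint: only honor requests that actually came via the gateway."""
    expected = hmac.new(GATEWAY_SECRET, message["body"],
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message.get("gateway_signature", ""))

legit = gateway_forward(b"<getCustomer id='42'/>")
print(endpoint_accepts(legit))                                   # True
print(endpoint_accepts({"body": b"<getCustomer id='42'/>",
                        "gateway_signature": "forged"}))         # False
```

This is why the endpoints must participate: a developer hitting the service directly from a laptop has no valid signature, so the request dies at the endpoint rather than relying on obscurity.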

    Agent Based Security Model



    Gateway Based Service Security Model


    Jun 21, 2011

    A Code Weapon


    Stuxnet: Anatomy of a Computer Virus

    Jun 7, 2011

    Early Signs of SOA Success

    I’ve been working with a client for an extended period of time.  This large IT department has had a variety of SOA tools and technologies available and has been doing major systems integration for 10 years.  Yet while their SOA tools have allowed them to integrate quicker than manual development, their integration methodology (essentially none) has given them 0% reuse.

    Reuse is a fine objective, but it may not actually be valuable depending on the business and IT organization goals.  In this client’s case we did an extensive evaluation of IT current state, IT future state plans and goals, and business goals.  That may sound like a lot of overhead to determine future state integration and SOA approaches, but in the current economic climate architecture for architecture’s sake is simply not acceptable (if it ever was).

    Or to put it another way, when IT is aligned with and demonstrating direct business value then IT is valued by the business.  And this attitude has to filter down to enterprise architecture, integration and SOA.

    This is not to justify SOA (service oriented architecture).  Rather, SOA must justify its overhead by demonstrating how it’s going to provide value in meeting the IT and business goals.

    At this client we identified 3 primary business and IT drivers for integration:

    1. Business and IT systems agility.  This client is in a dynamic business environment and is frequently reinventing parts of their business, leading to an unusually high volume of major application replacement and feature revamping.

    2. Reliability.  As the complexity of the interconnections between systems and applications had been increasing, reliability was suffering, sometimes with real-dollar, measurable business impact from downtime or data loss.  (Correspondingly, more and better people were needed for support as more time was spent on more complicated problems.)

    3. Integration Cost Reduction.  Integration (and integration support) were taking higher and higher percentages of project budgets, and the trend continued to grow.

    As I noted at the beginning of the article, I’ve been working with this client for some time.  Meeting this client’s goals is mostly about changing IT processes and IT thinking (though select tools can address certain parts of the problem).  We’ve been planning and preparing for this, planting the seeds as we review projects in progress before the new processes are complete.

    This week I saw in the organization the first signs of real SOA success.  I was sitting with an integration architect who described how he had just saved 75% of the integration effort across 3 projects because we designed the services used by the first project in a reusable pattern.

    And that’s how it starts.

    The goal is not reuse, the goal is aligning IT to meet the business goals.  Reuse is a method.  And seeing my client beginning to have success with the method and start meeting their goals…that’s exciting!

    (Now we have to quickly put in place the KPI’s [key performance indicators] to measure the success and report it to all levels of IT management.  That’s the way to reinforce the positive people pattern and get the integration people positive recognition.)

    May 1, 2011

    Unionized IT and SOA

    Labor unions are rarely found in IT organizations.  It’s not unheard of, but generally high pay rates and frequent job mobility have made labor or trade unions appear to be of limited benefit to the employee – and therefore rejected.

    In general labor unions impose rules on the management that require employees with the greatest seniority (most time at the company) be promoted to more senior positions as they open up.  And they require new employees to be brought in at the most junior level.

    IT, with frequently changing technologies, requires bringing in subject matter experts and promoting those demonstrating top technical skills to technical leadership positions.

    Recently I’ve been doing some consulting on a large scale IT project, which involves quite a bit of Service Oriented Architecture and involves a unionized IT department.  They’re struggling both with the technical aspects of a major technology and architectural approach change and with the union job impacts of such.

    In particular, like the classical unionized worker, much of IT is certain that it’s doing its job function “the right way”.  It knows this because it’s the documented, union-approved procedure, and therefore the “right” one.

    Similarly, because of this, the existing environment has been extremely slow to adopt new technology or new methods, as each such change requires negotiations with the union.  This has led to much of the IT operations literally remaining as green-screen mainframe applications written in a standard 2nd-generation programming language.

    As an interesting aside, I heard that one of the union contract terms for pay includes a multiplier based on the CPU (MIPS) capacity of the mainframe.  (I guess the assumption would be that if the “computer” is doing more, then either the workers are doing more or their work is more valuable.)  This has literally led to increases in computing power being delayed due to the impact on labor costs (and therefore slower application response time being maintained across the whole work force).

    It’s hard to see how one can operate within union work rules and have an agile and integrated IT environment.  Perhaps there are agile unions that could make such a thing possible, but traditional union patterns do not seem compatible with agile IT.

    Jan 9, 2011

    Instant Realtime BI with SOA BI



    BI, Business Intelligence, has taken hold at almost every mid-size or larger IT organization. 

    It commonly means extracting key data elements from all the main systems and databases in the organization and compiling them together in the Business Intelligence Data Warehouse.  The primary method for doing this is ETL – extract, transform, and load: basically, batch-style data loads performed daily, weekly, or monthly (from the source systems to the data warehouse).
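In code, an ETL step is just those three phases in sequence. A minimal sketch, with table and field names invented purely for illustration:

```python
# Minimal ETL sketch: extract from a source system, transform to the
# warehouse's model, load. Table and field names are invented examples.

def extract(source_rows):
    """Pull the nightly batch from a source system."""
    return list(source_rows)

def transform(rows):
    """Reshape source records into the warehouse's common model."""
    return [{"customer_id": r["cust_no"],
             "name": r["cust_name"].strip().title()}
            for r in rows]

def load(warehouse, rows):
    """Append the transformed batch to the warehouse table."""
    warehouse.setdefault("dim_customer", []).extend(rows)

warehouse = {}
nightly_batch = [{"cust_no": 101, "cust_name": "  ACME corp "}]
load(warehouse, transform(extract(nightly_batch)))
print(warehouse["dim_customer"])  # -> [{'customer_id': 101, 'name': 'Acme Corp'}]
```

Real ETL tools do this at scale, on a schedule, across every important data source; the shape of the work is the same.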

    Setting it up is expensive and time consuming as it requires building a large capacity database and ETL processes for every important data source in the company.  The ETL processes by themselves are often not enough as data duplication and data quality problems quickly float to the surface and have to be resolved to a sufficient level to continue (resolved in the data warehouse, not in the data sources).

    However, it’s relatively easy to demonstrate the business value of the resulting Business Intelligence Center, as that cross-system data provides business process statistics, results, and reporting that none of the systems can provide standing alone.  Further, intensive data mining can be performed that can’t be applied against the source systems (either because they’re operating live and can’t handle that depth of data access, or because the value comes from combinations across systems and complete business processes.)

    Most businesses that implement BI consider it a win and see their ROI.

    Now as I’ve been implementing a new enterprise SOA strategy at a major customer, the BI team arrived in an architecture strategy presentation meeting and asked “ok, how’s this (an enterprise SOA strategy including common entities and processes) different from BI?”

    In essence they’re pointing out that they have already integrated to everything (or at least every important data source), they’ve already translated all those disparate data models into a single enterprise model (the structure of their data warehouse), and they’ve already handled the cross-system object model conflicts and different representations of the same data.

    The primary differences between BI ETL integrations and SOA integrations are small.  One, BI is a timed, infrequent (daily or less) data extract; SOA is a realtime integration.  Two, BI tends to work in a true ETL model: extract the data in the source system’s format, transform it to the data warehouse’s preferred format, and load it into the data warehouse.  SOA, when it moves beyond a point-to-point connectivity technology, does several transformations: first from the source format to XML, second from the source data model to a company or industry standard, and third (in legacy situations) from the company standard to the destination system’s requirements (though this need fades with time if a company standard is selected).
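The SOA transformation chain described above might look like this in outline. The formats and field names are invented for illustration; a real implementation would use the company's actual canonical schema:

```python
# Sketch of the SOA transformation chain: source format -> canonical
# (company-standard) model -> legacy destination format. All field
# names here are invented examples.

def source_to_canonical(source_record):
    """Steps 1+2: parse the source's native record into the company standard."""
    return {"CustomerId": source_record["cust_no"],
            "CustomerName": source_record["cust_name"]}

def canonical_to_destination(canonical):
    """Step 3: map the company standard onto a legacy destination's fields."""
    return {"ID": canonical["CustomerId"], "NAME": canonical["CustomerName"]}

msg = {"cust_no": 7, "cust_name": "Initech"}
print(canonical_to_destination(source_to_canonical(msg)))
# -> {'ID': 7, 'NAME': 'Initech'}
```

Note how close this is to the ETL transform: both map many source models onto one shared model, which is exactly the overlap the BI team spotted.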

    But conceptually BI and SOA integrations are doing much of the same thing!

    Now BI is being faced with a new requirement…realtime.  Companies are seeing such a value in BI that they’re asking why they can’t get the reports and analyses RIGHT NOW (rather than tomorrow or next week).

    The SOA vendors offer a solution to part of this desire with their BAM – Business Activity Monitoring tools.  These tools (examples include IBM Websphere Business Monitor and Software AG Webmethods Optimize) monitor data elements passing through web service requests and use it to build a real time image of what’s happening – an image that can be identical to the BI image generated 24 hours later through summarized data reporting.

    However, the integration teams and Integration Competency Centers have generally been unsuccessful at selling this ability to their business users.  This isn’t because it doesn’t solve the problem, but because the business users of BI and integration are different.  BI is actually being used by key business users.  Integration generally serves the IT executives as its “customer”, and therefore BAM doesn’t fit in with its normal “offerings”.

    Realtime BI has become one of the “hot” IT topics for 2011.  There are two relatively easy solutions to help BI become realtime, though they’re somewhat problematic politically, as they violate current organization structures at many IT shops.

    Solution #1 – Get BAM SOA tools but give it to the BI team to use as part of the offerings to their data needing business customers.

    Solution #2 – IT shops with a good library of existing connections and services can begin to echo certain update services directly to the data warehouse and BI team for realtime handling.  (Realtime in this case means sent to a queue; the realtime update processes can’t stop and wait for the data warehouse to process the updates.)
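Solution #2 amounts to a fire-and-forget copy of each update message onto a queue the BI team consumes. A sketch, using Python's in-process `queue.Queue` as a stand-in for real messaging middleware, with invented payload fields:

```python
# Sketch of solution #2: echo each update service's payload to a BI queue
# without blocking the realtime path. queue.Queue stands in here for the
# real messaging middleware an actual shop would use.
import queue

bi_queue = queue.Queue()

def apply_update(payload):
    """Placeholder for the actual system-of-record update."""
    return {"status": "ok", "echoed": payload}

def handle_update_service(payload):
    """The normal realtime update path, plus an echo for BI."""
    result = apply_update(payload)   # the real business operation
    bi_queue.put_nowait(payload)     # fire-and-forget copy for the warehouse;
    return result                    # never wait on the BI side to process it

handle_update_service({"order_id": 9001, "amount": 250})
print(bi_queue.get_nowait())  # -> {'order_id': 9001, 'amount': 250}
```

The crucial design point is `put_nowait`: the update path enqueues and moves on, so a slow (or down) warehouse consumer can never stall the realtime business transaction.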

    In both cases it requires the BI teams to begin leveraging the integration team’s infrastructure.  However, the connection can bring BI the realtime options it needs with minimal effort.

    Other models, such as Change Data Capture utilities, are great for increasing vendor software sales (and they do work and are impressive tools) but aren’t necessary… IF we can get two disciplines with somewhat different historical goals to begin working together.

    Those that do will get a big and relatively inexpensive win.
