Jul 11, 2016

Amazon S3 Easy Scripted Backup from Windows for the Enterprise

I don’t normally post code, nor do I normally implement scripts myself.  But sometimes, to learn the ins and outs of a capability, you have to dive in and try it out.  Since I’ve been working with Amazon Web Services (AWS) from Windows, I’ve found a remarkable lack of sample scripts for it, so I’m posting my little project here.

For me, with heavy Unix scripting experience in my distant background, using PowerShell and the AWS PowerShell add-in was a no-brainer.  While the syntax of PowerShell is significantly different from the Unix Bourne shell, the capabilities are practically identical, including piping.

Now for the requirements:

- I needed to back up to the Cloud for an offsite backup.
- The data needed to be encrypted with a client-managed key, but I had neither the tools nor the onsite CPU or extra storage for client-side encryption.
- The backed-up data needed logging to track changes and access.
- To show any modifications, the backed-up files needed to be versioned – so no changed version would overwrite a previous version.

Here’s what I did…

1. Set up specific IAM users with permissions only for S3 by:

- Created an AWS group titled “backup_group”.
- Attached the policy “AmazonS3FullAccess” and no others.
- Created users “backup_user1” & “backup_user2”.
- Stored these users’ REST access keys in a secure, encrypted local location.
- Added both users to “backup_group”.

2. Create specific S3 buckets for these backups, accessible only by the backup users (and an administrative user), with logging.

- Created S3 bucket “backup-logs” with a lifecycle setting to DELETE all content 2 years old or older – meaning logs have a 2-year life. (If you don’t give log directories a lifecycle rule, they’ll accumulate forever with ever-increasing storage costs.  Since this “disk” never “fills”, you won’t get any kind of log error that would force rotation, just an ever-growing cost.)

- Created S3 bucket “backup-for-bi”, enabled logging to “backup-logs” with subdirectory logs-bi/
- Created S3 bucket “backup-for-DB”, enabled logging to “backup-logs” with subdirectory logs-db/
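For reference, the same 2-year expiration rule can also be expressed in code.  Here’s a minimal sketch of the rule structure as Python’s boto3 SDK accepts it – the rule ID is an invented placeholder, and the commented-out call at the end shows where it would be applied:

```python
# Sketch: a 2-year expiration lifecycle rule for the log bucket, in the
# structure boto3's put_bucket_lifecycle_configuration expects.
# The rule ID is an illustrative placeholder.
lifecycle_config = {
    "Rules": [
        {
            "ID": "expire-old-logs",
            "Filter": {"Prefix": ""},      # apply to every object in the bucket
            "Status": "Enabled",
            "Expiration": {"Days": 730},   # roughly 2 years
        }
    ]
}

# With an S3 client this would be applied as:
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="backup-logs", LifecycleConfiguration=lifecycle_config)
```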

3. Enable versioning to preserve each copy and prevent hidden changes – enabled on both buckets.

4. Utilize an upload command setting that requires encryption of the uploaded data with a client-managed key, which will prevent any unencrypted download of the content, even by Amazon.  The key will be stored locally.

- Now this was particularly tricky and confusing, encryption keys not being my specialty.  AWS’s server-side encryption is AES-256, which means the customer-provided key must be exactly 256 bits (32 bytes), Base64 encoded – and the Base64 encoding of any 32-byte value is a 44-character string ending in “=”.  My first generated keys were rejected, even after Base64 encoding.  In the end I generated key material from a passcode with OpenSSL’s AES-128-cbc key derivation, and Base64 encoded that, which produced a 44-character string (ending with =) that AWS accepted.  In essence, that Base64 string is, as far as we are concerned, the key – though I’m storing that string, the original passcode, and the 128-bit key and salt.
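For anyone reproducing this: since SSE-C wants exactly 32 bytes of key material, here’s a small stdlib-only Python sketch that derives such a key from a passphrase.  The passphrase, salt handling, and iteration count are placeholders – a plain `os.urandom(32)` works just as well if you don’t need a passphrase-derived key:

```python
import base64
import hashlib
import os

# Sketch: derive a 256-bit (32-byte) key from a passphrase.
# Passphrase and iteration count are illustrative placeholders.
passphrase = b"my backup passphrase"   # placeholder - pick your own
salt = os.urandom(16)                  # store this alongside the passphrase

key = hashlib.pbkdf2_hmac("sha256", passphrase, salt, 100_000)  # 32 bytes
key_b64 = base64.b64encode(key).decode("ascii")

print(len(key_b64))           # -> 44
print(key_b64.endswith("="))  # -> True
```

The 44-character Base64 string is what goes into the upload command; the passphrase and salt are what you keep locally to regenerate it.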

With all that prep ready, here’s the PowerShell to upload a list of directories, the list being embedded in the script.  $accesskey is the AWS IAM user access key shown by AWS on creation of the user.  $secretkey is also shown by AWS on creation of the user.

    # Upload Listed Directories to Amazon AWS S3
    # Requires: AWS Tools for PowerShell from http://console.aws.amazon.com/powershell/
    # Run as: powershell.exe .\AWS_Backup_Dirs.ps1

    $bucket      = "nameofbackupbucket"
    $backup_list = "E:\Prod", "E:\PreProd"
    $AES256_key  = "AAAABBBBCCCCDDDDEEEE99991111222233334444777="
    $accesskey   = "ASDKLJASDFJKLASDF"
    $secretkey   = "YOURSECRETKEY"    # shown by AWS on creation of the user

    try {
        Import-Module "C:\Program Files (x86)\AWS Tools\PowerShell\AWSPowerShell\AWSPowerShell.psd1"
    }
    catch [System.Exception] {
        $error_fail = "Error: AWS PowerShell Extensions not installed or missing from expected location... " + $_.Exception.Message
        Write-Host $error_fail
        throw $error_fail
    }

    foreach ($backme in $backup_list) {
        # Use the last element of the local path as the S3 key prefix
        $bucket_subdir = Split-Path -Path $backme -Leaf
        try {
            Write-S3Object -BucketName $bucket -Folder $backme -Recurse -KeyPrefix $bucket_subdir -AccessKey $accesskey -SecretKey $secretkey -ServerSideEncryptionCustomerProvidedKey $AES256_key -ServerSideEncryptionCustomerMethod AES256
        }
        catch [System.Exception] {
            Write-Host "Error: " $_.Exception.Message
        }
    }


The –KeyPrefix parameter specifies that the data will be written into subdirectories matching the last element of the directory path.  So if the path is D:\dog\cat, it will be stored on S3 in the subdir “cat”.

This script can be set up as a scheduled task and run daily or weekly.  BUT, as written it will send up the whole directory every time, incurring the cost of the full data transfer even if nothing has changed.  If you want incremental backups, you have to adjust the script to find only newer files, and loop to send them up one at a time (rather than the whole directory as in this script).

Hope this helps!

Jun 6, 2016

Cloud Flexibility encounters IT Procurement Inflexibility

The Cloud.  Whether it’s a mega-cloud provider in the public clouds, a private or managed cloud, on premises or off prem, cloud is all about flexibility.  Add an instance, add a service, it’s just a click.

(Note the dynamic cloud, the ability of an app to dynamically expand its resources as load increases, remains mostly hype.  While this was one of the first great promises of the cloud-o-sphere, it has not translated into reality.  Further, just shutting down non-production environments during the night or on a schedule can be a significant cost saver – but is not offered by the mega-providers.  At least in this area 3rd-party cloud support vendors have stepped in – but many are not aware of this and end up with lots of idle time on compute nodes.)

While the cloud providers offer that wonderful, relatively instant service with a click, each one of those clicks carries a cost.  And cost means…IT procurement, the department focused on getting the best deal for their IT dollar and making those long-term contracts that keep us operational.  And they have a process, a long process, for each item to be purchased.

Cloud flexibility means I can just add a node or VM, and add a backup or DB or firewall.  IT procurement means forms and weeks and reviews.

When we were purchasing compute capacity for the project for the year, which consisted of a series of servers and expensive software licenses, this made sense.  My purchase had significant cost and long term implications.

With Cloud accounts, my “purchase” can be “unpurchased” at any time (at least in the public clouds – private clouds often require some time commitment), and it can start small and grow as the capacity need grows.  (In traditional IT, “new projects” often purchased their first 2 years of servers in their initial purchase – that’s how it was done; nobody wanted their project to be in trouble due to insufficient resources, which often meant over-purchasing, and until the project was in production under actual production load, teams were often unclear on the actual server capacity needed beyond a high-level guesstimate…which could easily be 50% off.)  With Cloud, we can start small and increase capacity with ease as the actual usage grows.

Obviously a significant positive NOT to have to buy capacity we’re not using today and may not use for 18 months.  AND we can increase to the ACTUAL need rather than a predicted need 2 years ago.  BUT…if we have to go through a full IT procurement process as we make those Cloud changes, we’re hobbled and unable to gain that value.

This is not theoretical.  Try putting a cloud vendor through a Purchase Order and Statement of Work for “Add Managed Backup to Node, $129.95 per month” and “Storage Encryption Service, $39.95 per month” – the experience isn’t pleasant for anyone and makes Cloud use somewhat impractical UNLESS we return to the old approach of over-estimating everything we need and buying (allocating all those high-volume nodes and services) up front.

It’s easy to overuse, over allocate, and not manage the cloud resources (not release resources and services no longer in active use), and IT procurement can provide necessary oversight and management of cloud resources.  But they have to come with new cloud flexible procedures to do so.

Otherwise, the value of cloud services is lost.

Nov 25, 2015

Continuous Integration vs. Micro-Services


I was reading Mike Kavis’s Do This, Not That: 7 Ways to Think Different in the Cloud and encountered what initially sounds like reasonable advice…but becomes absolutely unrealistic.  I’ll explain why at the end.  Mike writes…

Think Empower, Not Control

The first thing many companies do when they start their cloud initiative is figure out how to lock it down. Too often, the people who own security and governance spend months (sometimes years) trying to figure out how to apply controls necessary to meet their security and regulatory requirements. Meanwhile, developers are not allowed to use the platform or worse yet, they whip out their credit card and build unsecured and ungoverned solutions in shadow clouds.

We need to shift our thinking from “how can we prevent developers from screwing up” to “how can we empower developers” by providing security and governance services that are inherited as cloud resources are consumed. To do this, we need to get out of our silos and work collaboratively. Instead of enforcing security and governance controls by requiring rigorous reviews, we need to bake policies and best practices into the SDLC.

Start with continuous integration (CI). Automate the build process and insert code scans that enforce coding best practices, security policies, and cloud architecture best practices. Fail the build if the code does not meet the appropriate policy requirements. Let the developers police themselves by using automation that relies on policies established by the security, governance and architecture teams. Set the policies and then get out of the way, letting the build process do the enforcement. Developers will get fast feedback from the CI process and quickly fix any compliance issues – they need to or the build will never get to production.

Once applications are deployed, run continuous monitoring tools that look for violations or vulnerabilities. Here’s a novel idea: Replace meetings with tools that provide real time feedback.

Ahh, the magic of the perfect Software Development Life Cycle empowered by the perfect DevOps continuous integration.  That will surely solve the problems of breaking down silos and collaborative working, right?

Sadly I’ve yet to encounter a technology or methodology that magically restructures the IT organization.  The IT organization structure has come into place due to business, IT management, and cultural drivers of the organization.  There are technology changes that have forced restructuring, but it’s always painful and time consuming.  Breaking silos is one of the hardest (as this usually challenges political control and authority structures – and people are loath to give up control / authority / influence.)

Mike focuses on the second part and skips a key idea he presents in the first part.  Namely, give the developers the environment that’s the target.  Or, expanding on this idea… ARCHITECT your systems for their target environment.  Architect systems for cloud or hybrid scenarios – should it matter to the application whether it’s deployed locally, locally on dedicated servers, on VMs, in soft partitions, in a private cloud, in a public cloud…or a mix of all the above?

Systems can easily be architected for distributed deployment…IF that thought goes into the requirements early on.  This may mean messaging and/or event driven architecture instead of real-time web services, even layering internal components to pass events instead of locally calling or instantiating.  By doing so the architecture becomes micro-service oriented, which at the macro level means component groups can be bundled into deployment packs and deployed across the various server / resource models as needed.

It’s not can we break down the silos between people and teams – which may be appropriate but long and painful and STILL result in unwieldy systems.  It’s how we model the interaction between and within the silos that will give the flexibility to deploy anywhere and coordinate/communicate/integrate practically automatically.

Sep 16, 2015

Micro-Services vs. Business Service Granularity


In the many discussions and architecture approaches I’m reading about Micro-Services, a key point seems to elude the conversation.  Namely, “What’s a Micro-Service?”  Not what are the technical properties of a micro-service, but rather what level of BUSINESS functionality should be encapsulated in a single service, a single “micro” service?

One “formal” definition that’s floating around is this…

...an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies. - James Lewis and Martin Fowler via http://techbeacon.com/ 

From the SOA (service oriented architecture) perspective, decomposing an application into its composite transactions and business objects, and wrapping them in accessible services, has been the trend for the past 15 years or so.  Applications have been turning into business engines, and in some cases the interface layer itself has been successfully moved from having low-level interaction with the business code to using transaction / object / service layers.  Seeing an application or platform or suite that offers a full catalog of accessible transactions and objects (wrapped in services) has become the norm (and even at the code level, structuring the classes in a similar way).

Example – Oracle e-Business Suite service catalog…


The micro-services architecture concept comes along and says “no no no, services should stand on their own”, to the technical point of being divorced from their implementation platform and container.  But at what level of granularity should BUSINESS SERVICES be implemented?  Can we build “virtual” applications that are logical composites of many “business micro services”, or do business transactions remain grouped as the combined code we refer to as “applications” and/or rolled up into “suites”?


Years ago I was giving a presentation describing how applications are decomposing, as they become integrated with and real-time linked to other applications and thereby become distributed applications – with a follow-up discussion of integration reliability, monitoring, control, security, etc.  The primary change of that generation was the application no longer standing on its own, even though that “its own” may have involved data being imported or updated from other systems regularly.  The distributed application moved data out of its own system and onto an integration connection point, with the advantage of making the data (or transaction) much more up-to-date – (semi) real-time data and transaction movement.

But, my customers asked me, what if we are developing NEW applications?  My answer - we would want to bundle the functionality differently, to allow more flexible combinations.

• Break our “Application” model into…

− Transaction Components

− Process Components

− Entities

− all exposed as Services

• Compose, Coordinate, Combine them into…

− Business Processes

− User Presentations

− meaning “Applications”

At the time, this still presented significant issues of managing, controlling, tracking, and securing.  API Management and Micro-Services would seem (in theory, it’s early days) to resolve these issues.  Which means we can get to a direct discussion of the granularity of BUSINESS SERVICES, allowing for BUSINESS-ORIENTED MICRO-SERVICES.


A Business Unit has one or many Business Capabilities.  A Business Capability is a series of one or many Business Processes.  And a Business Process has one or many steps, each being a Business Service.  From the IT perspective…


The business processes map to Workflows or Business Process Automations from the IT implementation perspective.  In older applications, workflow was encoded in the application itself.  Nowadays, if the Business Services are encapsulated and exposed, they can be orchestrated externally via BPM (business process management), BPA (business process automation), or other integration-layer controllers.


The question is, of what granularity should those Business Services be, and therefore the corresponding IT Services?  How “micro” should a “micro-service” be?  Is “Invoice Management” with “Create Invoice / Query Invoice / Update Invoice / Pay Invoice” a (large) micro-service, or is “Invoice Object” a micro-service encapsulating only the persistence and structure of the invoice, and Create / Query / Update / Pay Invoice each being a separate micro-service?  (With the connectivity overhead and external orchestration overhead).  And would breaking the functionality down to that level of granularity offer a sufficient value to compensate for the added overhead?
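To make the granularity question concrete, here’s a Python sketch of the coarse-grained option – a single “Invoice Management” service owning the whole lifecycle.  All names and the in-memory storage are invented for illustration only:

```python
# Sketch of the coarse-grained choice: one "Invoice Management" service
# owning the whole invoice lifecycle. Names and in-memory storage are
# invented for illustration.
class InvoiceManagementService:
    def __init__(self):
        self._invoices = {}   # stand-in for the service's own persistence
        self._next_id = 1

    def create_invoice(self, customer, amount):
        invoice_id = self._next_id
        self._next_id += 1
        self._invoices[invoice_id] = {
            "customer": customer, "amount": amount, "paid": False}
        return invoice_id

    def query_invoice(self, invoice_id):
        return self._invoices[invoice_id]

    def update_invoice(self, invoice_id, **changes):
        self._invoices[invoice_id].update(changes)

    def pay_invoice(self, invoice_id):
        self._invoices[invoice_id]["paid"] = True

# The fine-grained alternative would make each method above its own
# independently deployed service, plus an "Invoice Object" service owning
# only persistence -- every call then becomes a network hop coordinated
# by an external orchestrator.
```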

The answer is unclear.  The pure technology proponents would tell us to go full granularity, and not concern ourselves with the connectivity overhead – as that’s a container and implementation-level problem – nor concern ourselves with the orchestration overhead, as that’s an analysis and mapping problem (with its own implementation overhead).  The deploying, hosting, and tracking of all those services should also be “easily” handled and managed by the environment.  We’ll see.

Martin Heller at TechBeacon writes, “If a service looks cohesive and deals only with a single concern, then it's probably small enough. If, on the other hand, you look at a service interface and see a number of different concerns being combined, perhaps it's a candidate for further decomposition. At the other end of the spectrum, if the service doesn't do enough to feel useful, perhaps you overdid the decomposition and need to combine it with a related service. It's very much like the game of "find the objects" that people played when designing object-oriented software 25 years ago, but now the objects are services, not classes.”

I believe this ties in nicely with the picture above.  A “business service”, such as Invoice Management, with its functions of create / query / update / pay, composes nicely into an IT service…and therefore (in the new terminology) a “micro-service”.  The only reason to decompose further would be IF there were a desire or ability to substitute the granular functionality from other developers, development teams, or application vendors.  And while that’s conceptually feasible, managing service catalogs at that level is simply not (yet) reasonably viable.

What this says to me is that the ONLY difference between Micro-Services and the SOA Services of last year is the expectation that the corresponding code is sufficiently encapsulated to be independently deployable.  And while that independent deployability offers some interesting theoretical advantages, those advantages bring major management and control issues that have not been doing so well in practice.  Example – managing the service catalog.  SOA Governance, particularly design time governance, has not been successful in the field.  Few IT enterprises have gotten the ROI from the vendor offerings in this space.  API Management seems to be changing that, but still the point is managing and coordinating hundreds to thousands of coordinating services is a daunting task. 

I would advise stepping carefully into Micro-Services.  Try a few small projects to understand the dynamics of the use pattern AND the management pattern.  There are advantages to be had, but risks as well.  It’s likely we’re seeing the future of software development.  But the control structures and supporting architecture patterns are not yet in place.  Tread carefully.

Dec 31, 2014

Big Ball of Mud Software

In the space of Software Architecture, the “Big Ball of Mud” represents “natural growth” – or the system that just adds and changes without ANY planned architecture.  (More on the Big Ball of Mud here.)  While we hear about it, and sometimes run into it as we have to solve project problems, how do you spot a software product in that mode?

Side note… while traditionally the Big Ball of Mud describes gradual changes to a system or program, we also see a Big Ball of Mud in enterprise architecture: the unplanned, natural growth of various systems and technologies, and the interfaces and interconnections between them.  While dealing with spaghetti code is tough, dealing with spaghetti connections and systems is extremely expensive and risky – but is all too frequent.

Here’s a software product conversation I had this week


Please wait for a site operator to respond.   You are now chatting with 'Randy'.   Your Issue ID for this chat is LTK1219208815693X

Randy: Welcome to Unnamed Product Corporation Sales Support. My name is Randy, how may I help you today?

Akiva: Hello Randy. I wish to sync Outlook between my Outlook, which is on my office network and exchange service, and an Outlook that's on an isolated (no internet connection) secure network. So my questions are:

1 – is Product the tool for this (I actually only need to sync calendar)?

2 - will I be able to install it without an internet connection (since the sync computer is on a non-internet-connected network)?

(isolated computer does have a normal USB)

Randy: Product has a feature to synchronize Outlook with a USB stick.  But to install Product Internet connection is necessary.

Akiva: would seem to reduce the value of a USB sync.  (In other words, WHY IN THE WORLD would you sync by USB stick if you have an Internet connection?)

Randy: Could you please clarify your two computers are connected to the same network?

Akiva: no they are not.  Computer 1 is a normal internet connected laptop, connecting to an exchange server in office 1.  Computer 2 is on a disconnected private network with a private exchange server, with no external network connections.

Randy: In this case the only option to synchronize your Outlooks is USB stick profile (great, this is what I want to do), but to install Product is necessary Internet connection.

Akiva: Can you please confirm that? Your download page as a link that says "Full Standalone Installation* *Internet connection is not required during installation"

Randy: Just a second.  I just talked with our technician, he confirmed that you are able to install Product without an Internet connection, sorry for the mistake.

Akiva: It's ok. However, I’m trying it – it says it requires a particular KB patch to .Net 4.0 as a supporting component - which it tries to download. and of course fails - the computer has no internet. I'm now trying to download and pre-install that component. (So tell your tech you were 1/2 right.)

Randy: You are able to download Product trial version which has 14 days trial period and try it.

Akiva: Ok, important question for the tech. When I buy it, does it require an internet connection to verify/certify/check the license id you provide?

Randy: Internet connection is required to activate your license.  Also to renew it.

Akiva: So, to confirm, you can install it stand alone but not buy it and use it that way. You have a stand alone install, but it requires an internet component to download. So... the product CANNOT be used on a stand-alone computer for sync.

Randy: Yes, you are right.  May you have more questions for now?

Akiva: Well that's just frustrating. Kind of makes the USB sync feature useless. Oh well, thanks anyway.

So what happened here?  This company developed a non-network based sync option, then later developed an internet based licensing requirement that invalidates the non-network based sync option – basically making the product useless.  I wonder if they actually sell any of these, or sell them and have people demand refunds since it effectively is useless now.  Bad planning, no overall architecture to understand the impact of one feature set on the rest of the system.

Dec 25, 2014

Bad Integration by Design or How to Make a Horrible Web Service

To understand what makes easy integration or a “good web service”, it’s worth taking a glance at the historical methods of I.T. systems integration.  After all, business systems have been passing data around and/or activating each other, aka integrating, for almost as long as there have been commercial I.T. business systems (approximately since 1960). 

The first major “interface” method between systems was throwing sequential fixed-length record files at each other.  This was pretty much the only method for 20 years and still remains in widespread use, though mostly around mainframe and legacy systems.  The system providing the interface, either outputting the data or providing a format in which to receive data, defines a field-by-field interface record, along with header and footer records.  Because these are fixed-length records, the descriptive definition (the human-readable documentation) must include the format and length of each field, along with any specialized logic interpretation or encoding.  For example, if a record represents a person, including their gender, it might specify a 1-byte single-digit field, with a 0 representing male and a 1 representing female.  (Given that this approach started in the early days of computing, there is also a strong tendency to minimize data size – save the bytes! – leading to additional encoding logic within the definition.)  Because the definition is fixed-length records, no data typing can be enforced within the data format, only at the time of programmatic interpretation.
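To see how much of that contract lived in documentation rather than in the format itself, here’s a Python sketch of parsing one such fixed-length person record – the layout, field widths, and encodings are invented for illustration:

```python
# Sketch: parse a fixed-length "person" record. The layout (positions,
# widths, gender encoding) is invented for illustration -- in real
# interfaces that knowledge lived only in the human-readable spec.
GENDER_CODES = {"0": "male", "1": "female"}


def parse_person_record(record):
    return {
        "name":   record[0:20].rstrip(),     # cols 1-20, space padded
        "gender": GENDER_CODES[record[20]],  # col 21, encoded digit
        "salary": int(record[21:30]),        # cols 22-30, zero padded
    }

# Nothing in the bytes says which field is which, or that "1" means
# female -- without the documentation the record is opaque.
rec = "SMITH, JOHN".ljust(20) + "1" + "000042500"
```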

So how did this approach work?  It worked great.  This is the base approach of generations of systems, especially financial and business systems. 

If it worked great, why don’t we do this anymore?

Answer: Because of the data typing (no enforcement in the format), the encoding (no enforcement in the format, and not understandable without documentation), and other dependent logic (such as cross-field validation instructions, for example “if field 2 is female, then you may fill out field 9 for number of pregnancies”), getting an interface built and correct would take 2-6 weeks per connection.  So while this method worked, it was time consuming to successfully implement.

APIs came along to allow direct activation, and defined a fixed set of data types required to activate.  This solved the first model’s problem of data typing without enforcement, and part of the documentation problem (the data types became self-explanatory).  Further, APIs could define descriptive names for the data fields, thereby providing some self-documenting ability within the API.  A major improvement.

APIs, however, added a new problem: they were technology, and often version, dependent.  Meaning an API exposed on one system in one language in one release was compatible only with another system with a matching system / language / version. 

Regardless, integration via APIs was easier and faster.  And it became the base technology that allowed Windows, Unix and other modern operating systems to move from being simply an execution starter and hardware interface to being a facilitator of interaction between applications.  It further allowed a real-time interaction that was not possible previously.  That said, figuring out and correctly using an API could still take days to weeks.  Embedded cross field validation and logic would often slow down the process.

APIs evolved in the next generation into REMOTE APIs.  Remote APIs moved cross-application interaction to cross-system, cross-environment interaction.  The original remote API technologies with commercial success included DCOM, CORBA, and RMI.  All of these commercial implementations worked, but were very complicated and highly sensitive to perfect conditions.  And, for the most part, they were TECHNOLOGY specific (as well as version specific).  So while they began to offer the new ability of remote invocation and/or coordinated system interaction, the environment had to be perfectly configured with matching technology and version.

Each one of these generations of integration technology worked within its context and solved problems not previously solvable – offering new abilities and new opportunities.  Yet their limitations meant they remained niche solutions for specific, narrow problems. 

With the arrival of web services, a new integration level was reached.  Web Services offered all the previous abilities while adding key points:

- The data format is XML, and therefore self-descriptive.

- The service and data format is defined with an XSD, and therefore self-validating.

- The communication protocol is firewall and technology neutral and friendly.

- The data format is technology neutral and supported by all development tools.

With these abilities added to the historical ones, integration moved from a major project effort to…simple, trivial, fast.  And with that change web services and integration became more than just commonplace, it became the way to do things.  (This brings some new problems, such as integration spaghetti and interconnection dependencies, but that’s a different discussion.)

So how do you make a horrible web service?  Simply strip away one or more of the primary advantages it offers.  Examples:

-- Serialize language specific objects into your web service as one or more data items.  For example, serialize a .NET object into your web service.  The result, a web service that can only work with .NET (and of the appropriate version).  Yes, I’ve seen this done.

-- Place “codes” in data fields in the web service.  For example, make a field <Gender> where “3” = Male and “1” = Female.  Then explain to the user of the web service that they must download your table of codes / values to insert the correct values or interpret the values.  This, sadly, is a not-uncommon error.

-- Structure the XML as just a flat list of fields even though it could be placed in a hierarchy, or is in a hierarchy in the objects or database tables.  The corollary of this error is to expose multiple services for each level of a hierarchy rather than one service with a hierarchy.  This is the error of sharing data and not the business function / transaction.  All too common.
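To contrast the last two errors with what XML gives for free, here’s a small Python sketch – element names invented for illustration – of the same data exposed flat-and-coded versus hierarchical and self-descriptive:

```python
import xml.etree.ElementTree as ET

# Flat and coded (bad): meaningless without the vendor's code tables.
bad_xml = "<Rec><F1>10023</F1><F2>3</F2><F3>42500</F3></Rec>"

# Hierarchical and self-descriptive (better). Element names are invented
# for illustration; an XSD could validate this structure and its types.
good_xml = """\
<Invoice>
  <Number>10023</Number>
  <Status>Paid</Status>
  <Lines>
    <Line><Item>Widget</Item><AmountCents>42500</AmountCents></Line>
  </Lines>
</Invoice>"""

invoice = ET.fromstring(good_xml)
status = invoice.findtext("Status")   # readable with no lookup table
amount = int(invoice.find("Lines/Line/AmountCents").text)
```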

In general, stripping an ability from a web service drops it back to an earlier generation: the result is a service of limited use, difficult re-use, and challenging to understand.  Each of these problems turns into extra time and complexity, the exact opposite of what services came to solve.

I recommend avoiding these errors.

Oct 21, 2014

CIO Interview–Integration (SOA / SOAP / Web Services) Impact

Some years ago I interviewed a CIO of a Fortune 500 IT vendor as part of an Integration Improvement project. His responses helped shape the goals and roadmap of the project, as business drivers and goals should always be taken into account in how one models the architecture and integration space.   The interview gives a great view of IT management business drivers in the integration space.  Company identifying information has been removed.

Question: How is integration?

-- Almost everyone at our company is an expert on Information Systems.

-- Our culture - everyone (department/division) believes their own numbers and requirements and doesn't believe anyone else's. Words like "mutual understanding" or "common terminology" didn't exist in the past.

  • The reason there is no real enterprise integration is how the company has been run. Each unit built its own systems.
  • The other units came to IT and said "buy this for us or build this for us, exactly per our requirements".
  • The result: we have a large number of systems, the majority home grown, because building to our own specs was (considered) easier (in the past).

Today we want to do the opposite. We want to create best practices and show to our customers "this is the best way to manage your business". To implement the best practice by picking the best packages (vendor software) that does those processes. (Best of breed approach - but the business idea behind it is best-of-breed to implement specific target business processes, some of which may be offered as solutions to customers.)

Now we are not "developing" but coming with a solution. But there is no one company that can bring the best solutions across all the lines of business. Therefore we understand that we'll need multiple vendors to provide all the needs.

We have a variety of lines of business. And there are different requirements in the different lines. So we have to come up with the best practices for each business unit...and they don't want to bother with integration (they don't see integration as part of their business goals - interoperating with other business units is not a focus for them). But they want us (IT) to build them one integrated view (across business units) of what's going on.

The result - different packages (that we didn’t build), each from a different vendor. Some according to standards, some not (because sometimes the best solutions aren't standards compliant), and sometimes even different solutions from the same vendor operate per different standards (a result of many vendor portfolios being built via acquisition).

The expectation is to build a common integration layer that can integrate the various tools from the different vendors….AND provide a common process view and data view that we can analyze.

...and the cost of maintaining this should go down (in comparison to today) and should continue to decrease.

So adding new systems shouldn't create linear growth in cost. The company wants to measure IT: what would the cost have been, adding more and more systems the old way, and how does it compare to the current cost? They want us to reduce the real maintenance cost of today even though we're adding systems and capabilities.

They want IT costs NOT TO GROW but provide additional capabilities (do more, spend the same).

How to do it? Some kind of miracle (we have to find).

Question: What's the time frame to meet these goals?

Answer: No one expects us to come up with all of this, this year. But what they do expect is new systems work should be meeting the new goals.

ROI - people have to commit to improved project impact by showing over 5 years how costs in particular areas (for example integration support) will be reduced.  In order to make decisions we need to quantify everything as best as possible...even knowing some values are assumptions that you can't really quantify at this stage. So we need to build a strict ROI, but we can, for example, quantify the value of agility (an example I provided).

Because we're in the process of replacing many older systems, we can easily quantify major ROI returns much faster. (Meaning switching old systems to a new method might have a very long ROI period, but incorporating the new method with the implementation of a new system eliminates a "change over" cost as that's already part of the old system replacement cost.)

After we make a decision on best internal practices, the cost of new projects will automatically include the overhead of doing the interface according to the right method/pattern/etc. And it's our job to convince the managers that it has the long term value in reducing the maintenance costs.


Question: Cultural drivers?

Answer: Our main driver is being able to respond to customers' needs and a changing business. As a company operating in technology fields, the technology is constantly changing, as is the business environment.

We must be able to integrate new things very fast, and do so without destabilizing existing systems or significantly increasing support costs.

Today the capabilities we (IT) give to the company are very low. We're not meeting the business requirements for new functionality that the business demands. There is a long backlog of features the business has requested that we can't provide within our available budget and resources. We need to be able to deliver fast and be agile.

Aug 20, 2014

How an Open Data Feed changed Israel’s Civil Defense


Israel is under frequent rocket attack from Gaza – as frequent as every 10 minutes.  While the Iron Dome rocket interception system has become famous as a technological marvel in the defense of the country, there are other technological marvels of note as well.

The first step of any civil defense system is getting the civilians out of the way or under cover.  Israel has a network of neighborhood bomb shelters, building bomb shelters, and (in new construction) a “hardened room” in every private residence and on every floor of every office building.  When air raids were measured in hours or tens of minutes, all of these were adequate together with a nationwide network of air raid sirens.

But modern circumstances have brought two new problems:

- Rocket attack warnings are measured in SECONDS.  Fifteen seconds in towns near border regions, and 90-120 seconds in the center of the country.

- As the country has suffered suburban sprawl, built malls and cinema mega-plexes, modern skyscrapers, joined the problems of traffic jams, and built it all with modern climate control (meaning sealed or closed windows and A/C), HEARING alarm sirens has become a problem.

Like any government agency, the Israeli Civil Defense department – a division of the Israeli army – has implemented big-project approaches to these problems.  A radio-based, pager-like messaging device… too expensive except for large businesses or office buildings (which can then manually alert tenants).  A cell-based pager messaging device with digital output… with reception problems and a complicated interface requiring special software – again making it of limited use.  The newest addition, SMS messages to all cell phones from cell towers in an alert area: a massive project that required integration with all the cell phone providers but only results in a regular SMS “ding” – making it useless.

A lot of effort and a lot of money with the problem continuing to grow and the current solutions offering only limited impact.

But then something amazing happened.  The Civil Defense department public web site integrated a real time alert box onto the site.  It was unnoticed by almost everybody except for a young man in southern Israel in a community frequently targeted.  Since a web page is, by nature, open source, he looked into the page to determine where they were getting their data – their real time data of civil defense alerts for Israel.

He took the data feed, a nicely formed JSON data URL, set up a server polling it, and built an Android client.  This became the first “Code Red Israel” alert app.  Someone else contacted him and asked to use his server, and built an iPhone edition.  This was 2 years ago.
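The polling approach he used can be sketched in a few lines.  Everything specific here is an assumption for illustration: the feed URL, the polling cadence, and the JSON field names are not documented in this post, so treat this as the shape of the idea, not the real endpoint.

```python
import json
import time
import urllib.request

# Hypothetical feed URL - the real Civil Defense endpoint differs.
FEED_URL = "https://example.org/israel-alerts.json"

def parse_alerts(payload):
    """Extract the list of alerted areas from a decoded feed payload.

    Assumes a {"data": [...area names...]} shape; returns [] when quiet
    or when the payload is empty/malformed.
    """
    return payload.get("data", []) if isinstance(payload, dict) else []

def fetch_alerts(url=FEED_URL):
    """Download and decode the current alert list."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return parse_alerts(json.load(resp))

def poll(interval=2.0, on_alert=print, fetch=fetch_alerts):
    """Poll the feed, invoking a callback once per newly seen alert area."""
    seen = set()
    while True:
        for area in fetch():
            if area not in seen:
                seen.add(area)
                on_alert(area)
        time.sleep(interval)
```

A server running a loop like this can then push each new alert out to mobile clients, which is essentially the architecture the first app used.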

With the current conflict and the terrorists expanding their targeting to civilian cities and towns across Israel, the apps gained widespread attention.  So did interest in creating additional abilities, options, and clients.  An explosion of apps and abilities has been created over the past two months.

Examples include: real time alert monitoring web pages (in Hebrew and English), extensions for Chrome, iPhone and iPad apps that offer various sounds, filtering by city, maps of alert locations, commenting to share thoughts of being targeted, and Android apps that do all of the same – in Hebrew, English, or Russian (major languages used by segments of the population in Israel).  And like any app category, a competition has emerged between apps to offer the most useful features – even though most of the apps / pages / extensions do not charge or even offer ads (meaning they’re covering their development and server costs out of pocket).

Today while waiting in line at a grocery store or sitting in an office, almost everyone’s phone will go off if there’s an alert – some for only the local area, some for the whole country (as each person prefers).

There’s a key additional point.  The data feed seems to provide the alert data before the actual sirens sound – up to ten seconds earlier, depending on the speed of the monitoring service.

So while the Civil Defense department spent years and millions on building up a technological infrastructure, their biggest success was by accidentally offering an open JSON data feed.

My personal alert project is a web 2.0 site at http://IsraelSirens.com.

Jul 18, 2014

MDM & SOA - Layer, Repurpose or Replace?

An Architect Friend sent me this extended architecture question...

I recently joined a company that provides business consulting services (via many MBAs) related to sales and marketing.  Most clients are large pharma companies.

In addition to consultants, there are business process outsourcing teams (offshore) that do operations (like incentive plan management, report distribution, etc.)  There is also a BI/reporting group that creates BI/DW solutions (custom ones using a template approach) for large clients.  Plus there is a Software Development group (SD).

Over the years the Software Development group of the company created various (10+) browser-based (.NET/SQL) point-solutions/tools to help consultants (and eventually some head-quarters users) perform specific tasks. For example:

- Designing sales territories and managing the alignment of reps to territories
- Custom ETL-based tools to perform incentive calculation
- Some Salesforce-like platform for creating custom form-based apps

The applications are architected as single-tenant – with some deployment tricks to be able to deploy an “instance per client” on the web servers. The databases are isolated per client/instance.  The tools are sold as if they are part of an integrated suite, but they aren’t natively integrated and require custom integration.

There is a custom grown ETL-like tool for interconnecting the tools to each other (but not standard connections since the data models are all “flexible” and not well defined) plus Informatica and Boomi to get data from clients.  Some clients use one tool, some use 2, some use 3, etc.  Some tools are used directly by the client, but most are used by the consulting teams on behalf of the client.
Lately, there is a desire to make it all “integrated” across the company (SD + BI + all else).  Two main themes are emerging (even prior to me joining): “common data model” and “SOA”.  There is also the question of letting existing applications function as-is and developing new ones on a more proper architecture, versus trying to evolve the existing apps.
However, the understanding of how this applies to an Enterprise looking inward on its own systems and trying to align them, versus Independent Software Vendor (ISV) looking to build software for other Enterprises did not yet sink in… and concepts are being confused…
The tension between a standardized productized software versus customized (consulting company) software solution is not yet resolved.

I wanted to ask if you had experience in environments where an ISV was trying to define the enterprise architecture of their solutions for customers versus their own internal architecture.

Are there any case-studies or resources you could point me to get some reference architecture examples?

I usually do not like “next gen” approaches, but I am not seeing much potential in evolution of existing assets into an integrated state (they have a lot of “baggage” and features that were there but don’t play nicely with “integrated” world-view).

Here's my answer:

I wanted to ask if you had experience in environments where an ISV was trying to define the enterprise architecture of their solutions for customers versus their own internal architecture.

-        No, though I have built integration competency centers and projects that were providing service environments across very large scale enterprises of disparate divisions.

Are there any case-studies or resources you could point me to get some reference architecture examples?

-        Not that I know of.  I'm not much of a fan of such studies, mostly because the requirements and details are always highly complex, and those details directly affect the approaches taken.  Studies and reference architectures provide a nice high level structure – but the more you try to keep to them in the details the less effective they are (as they are mismatched to the exact situation).  I use bits of Togaf-9 from opengroup.org, bits of CBDI from Everware http://www.everware-cbdi.com/, and various tidbits picked up from Zapthink (though every few years they discuss the benefits of yet another framework).

I usually do not like “next gen” approaches, but I am not seeing much potential in evolution of existing assets into an integrated state (they have a lot of “baggage” and features that were there but don’t play nicely with “integrated” world-view).

-        It's a pretty standard problem: how to balance between what is, how it can be extended / expanded / reused, and what should be replaced / redeveloped / moved up to a new generation of technology, pattern and features.

- The problem you describe sounds like it crosses between SOA / integration and MDM (master data management).  Sometimes a SOA façade can provide an MDM operational model, with composite services doing multi-system queries, combining or rationalizing the results, and presenting single meaningful "views".  In other cases it's the SOA abilities enabling MDM to do its job, which often involves significant bi-directional synchronization.

- The MDM tools tend to be heavy, and the business and systems analysis work (which system wins when data is in conflict, for example) is a major portion of the success or failure.

- That said, IF you are only trying to get views of the data, I am hearing reports of good success with some of the easier BigData tools (such as MongoDB).  Success meaning they are able to develop and deploy meaningful business results in months, whereas MDM and big integration SOA projects almost always take over a year.
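A minimal sketch of the composite-service façade idea mentioned above: query two backend systems, rationalize conflicts with a survivorship rule, and present one merged "view".  The backend shapes, field names, and the rule itself (CRM wins on shared fields) are all assumptions for illustration.

```python
def merged_customer_view(customer_id, crm, billing):
    """Composite 'single customer view': merge records from two systems.

    Survivorship rule (an assumption here): start from billing, then let
    CRM override any shared fields it has a non-empty value for.
    """
    crm_rec = crm.get(customer_id, {})
    billing_rec = billing.get(customer_id, {})
    merged = dict(billing_rec)
    merged.update({k: v for k, v in crm_rec.items() if v is not None})
    return merged

# Stand-in backend data for the sketch.
crm = {"c1": {"name": "Acme Corp", "email": "ops@acme.example"}}
billing = {"c1": {"name": "ACME CORPORATION", "balance": 1200.0}}

view = merged_customer_view("c1", crm, billing)
print(view)  # name comes from CRM, balance from billing, email from CRM
```

The hard part in practice is not the merge mechanics but the analysis behind the survivorship rule itself – deciding, per field, which system wins and why.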
