
Posts

Data Gravity and Cloud Uplift Woes

When I originally learned about Data Gravity via David Linthicum’s excellent podcasts, a key architecture point stuck in my mind: your application needs to be close to its (interactive) data sources. (Data Gravity has since picked up a wider yet less useful definition, as more applications cluster together in a “data galaxy” to be close enough for large amounts of data to interact.) Why? Every interaction with a database has communication (network time) overhead. Most applications are built with their servers and database on the same LAN / subnet / vnet – usually in close physical proximity – specifically so that time is minimized. Every hub/switch/router adds time to the request: a request that pays 10ms when it’s local pays more for every additional “hop”. Application performance and tolerances are implicitly built around that response overhead. If data takes too long to return, the developers will likely adjust their code to do bigger queries, wider joins, etc. But what…
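
To make the per-hop arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. All the numbers (10ms local round trip, 5ms per extra hop, 50 queries for a chatty page) are illustrative assumptions, not measurements:

```python
# Toy model of network hops vs. chatty data access.
# All numbers are illustrative assumptions, not measurements.

ROUND_TRIP_MS_LOCAL = 10   # app and DB on the same subnet
MS_PER_EXTRA_HOP = 5       # each additional hub/switch/router/WAN leg
QUERIES_PER_PAGE = 50      # a chatty page: one query per row or widget

def page_latency_ms(extra_hops: int, queries: int) -> float:
    """Total data-access time for a page: queries * round-trip cost."""
    round_trip = ROUND_TRIP_MS_LOCAL + extra_hops * MS_PER_EXTRA_HOP
    return queries * round_trip

for hops in (0, 2, 6):
    print(f"{hops} extra hops: "
          f"{page_latency_ms(hops, QUERIES_PER_PAGE):.0f} ms chatty vs "
          f"{page_latency_ms(hops, 1):.0f} ms batched (one wide query)")
```

The point the model makes is that added hops multiply across every round trip, which is exactly the pressure that pushes developers toward bigger queries and wider joins.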
Recent posts

Serverless - Unintentional Granularity Problems

Serverless is great, but serverless functions rarely run by themselves... they're usually connected to data sources – and frequently that means SQL databases. In the graphic attached, my cloud SQL database is collapsing under assault by a serverless function that's scaling up instances to meet demand. Here's how we unintentionally created this problem: to speed things up with parallel processing, we've taken a transactional billing file, broken it into groups of 30 transactions, loaded those into messages, and queued them. We then have a serverless function listening for these events; if messages are still waiting after a certain amount of time, another instance is spawned to begin handling the waiting messages. When it was sent thousands of transactions, this worked great, and the business users were suitably impressed with the processing speed (which dropped from many hours to a few minutes). When they began sending larger transaction…
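
As a minimal sketch of the mitigation (all names and limits below are assumptions, not the original code): cap how many workers can hit the database at once. In a real deployment the cap usually lives in platform settings, such as a maximum instance or concurrency limit on the function, but an in-process gate shows the idea:

```python
import threading

# Assumption: the SQL tier can tolerate ~8 concurrent writers before it
# starts to collapse; everything past that queues instead of piling on.
MAX_DB_CONCURRENCY = 8
db_gate = threading.BoundedSemaphore(MAX_DB_CONCURRENCY)

def handle_message_batch(batch):
    """Queue-triggered handler; `batch` is one message of ~30 transactions."""
    with db_gate:  # blocks when the database is already at capacity
        for txn in batch:
            post_transaction(txn)

def post_transaction(txn):
    # Hypothetical placeholder for the INSERT/UPDATE against the billing DB.
    pass
```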

Cloud Native is already Yesterday

Cloud hosting, de-coupling (server) virtualization from the supporting hardware and moving it to a Public Cloud (letting the physical tier and network tier be someone else’s problem), is old news. Plenty of IT shops are still struggling with what to move, when or if to shut down their data centers, and the issues of data gravity… regardless, Cloud hosting has passed mainstream adoption and moved far into the Late Majority. The surprise is that Cloud Native, replacing ‘virtual machines’ and the implementation of application server (software), database server (software) and the like with “As A Service” Public Cloud services, is also far along. And regardless of how far it is in the adoption cycle, it is far along in technology maturity. Far enough that the Public Cloud vendors have already built the next generation upon it. The next generation and next cycle, Cloud as the Application Service Layer, is already here, already market viable, and already becoming the…

Enterprise Strength Integration - as of 2011

Time for another historical integration presentation. This one is from 2011, to a major corporate client. It’s visually less pleasing (I had to reformat it off the customer’s corporate template), but the information value is high. Every presentation is contextual, but I think this one's got some pretty strong hints. Ongoing credit to @Hillel Fuld for inspiring me to contribute, and congrats on his new https://www.hillelfuld.com. While I've seen the value of sharing and contributing in private life and community, Hillel demonstrates the strong value of sharing in the business world and being a mensch. While these presentations are dated, I think they offer value in understanding where the tech world was, how it has built up to where it is, and some key ideas that can make a difference in projects today. Enterprise Strength Integration (as of 2011) from Akiva Marks

No SOA ROI - SOA is Dead? Getting real ROI from Integration - as of 2009

Continuing to share my historical enterprise architecture presentations, with a strong focus on enterprise systems integration. Ongoing credit to @Hillel Fuld for inspiring me to contribute. While dated, I hope they offer value in understanding where the tech world was and how it has built up to where it is. This presentation is older, from 2009, less flashy and more wordy, but it may actually take you deeper into enterprise integration concepts. Please comment if there are any points you'd like to discuss. No SOA ROI - SOA is Dead? Getting SOA Value from Akiva Marks

SOA Methodology & Strategy - as of 2010

I've decided to share my library of presentations from my years in enterprise architecture at a US Fortune 50 corporation and my years as an enterprise architecture & integration consultant and IT management consultant in Israel. I hope you enjoy them. While they may be dated, they may be of value in understanding where some of today's technologies and software catalogs are at, and where there can be some improvement. Here's the first... SOA (Service Oriented Architecture) Methodology, circa 2010. I credit Hillel Fuld's recent interview for the inspiration to share. SOA Methodology - Strategy (as of 2010) from Akiva Marks

Amazon S3 Easy Scripted Backup from Windows for the Enterprise

I don’t normally post code, nor am I normally implementing scripts myself. But sometimes, to learn the ins and outs of a capability, you have to dive in and try it out. Since I’ve been working with Amazon Web Services (AWS) via Windows, I’ve found a remarkable lack of sample scripts for using it, so I’m posting my little project here. For me, with heavy Unix scripting experience in my distant background, using PowerShell and the AWS PowerShell add-in was a no-brainer. While the syntax of PowerShell is significantly different from Unix Bourne shell, the capabilities are practically identical, including piping. Now for the requirements:

- I needed to back up to the Cloud for an offsite backup.
- The data needed to be encrypted with a client-managed key, but I had neither the tools nor onsite CPU or extra storage for client-side encryption.
- The backed-up data needed to track changes or access.
- To show any modifications, the backed-up files needed to manage versions…
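
The original scripts were PowerShell with the AWS add-in; as a rough sketch of the same requirements in Python with boto3 (bucket name, key material, and paths below are placeholder assumptions), S3 bucket versioning covers the version-tracking requirement, and SSE-C keeps the key client-managed while S3 does the encryption server-side:

```python
import pathlib
import boto3

s3 = boto3.client("s3")
BUCKET = "example-offsite-backup"             # hypothetical bucket name
SSE_KEY = b"0123456789abcdef" * 2             # 32-byte client-managed key (demo only)
STAGING = pathlib.Path(r"D:\backup-staging")  # hypothetical staging folder

# Versioning: every modification of a backed-up file is preserved.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

for path in STAGING.rglob("*"):
    if path.is_file():
        # SSE-C: S3 encrypts server-side with a key only the client holds,
        # so no onsite CPU or extra storage is spent on client-side encryption.
        s3.put_object(
            Bucket=BUCKET,
            Key=path.relative_to(STAGING).as_posix(),
            Body=path.read_bytes(),
            SSECustomerAlgorithm="AES256",
            SSECustomerKey=SSE_KEY,
        )
```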

Cloud Flexibility encounters IT Procurement Inflexibility

The Cloud. Whether it’s a mega-cloud provider in the public clouds, a private or managed cloud, on premises or off prem, cloud is all about flexibility. Add an instance, add a service, it’s just a click. (Note: the dynamic cloud, the ability of an app to dynamically expand its resources as load increases, remains mostly hype. While this was one of the first great promises of the cloud-o-sphere, it has not translated into reality. Further, just shutting down non-production environments during the night or on a schedule can be a significant cost saver – but is not offered by the mega-providers. At least in this area 3rd-party cloud support vendors have stepped in – but many are not aware of this and end up with lots of idle time on compute nodes.) While the cloud providers offer that wonderful, relatively instant service with a click, each one of those clicks carries a cost. And cost means… IT procurement, the department focused on getting the best deal for their IT dollar.
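
The scheduled-shutdown point is easy to sketch. A minimal Python/boto3 version, assuming (hypothetically) that non-production instances carry an env=nonprod tag, would run from a nightly scheduler:

```python
import boto3

ec2 = boto3.client("ec2")

# Find running instances tagged as non-production (tag name is an assumption).
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:env", "Values": ["nonprod"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [
    inst["InstanceId"]
    for res in reservations
    for inst in res["Instances"]
]

# Stop them for the night; a mirror-image script starts them in the morning.
if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
```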

Continuous Integration vs. Micro-Services

I was reading Mike Kavis’s “Do This, Not That: 7 Ways to Think Different in the Cloud” and encountered what initially sounds like reasonable advice… but becomes absolutely unrealistic. I’ll explain why at the end. Mike writes…

Think Empower, Not Control. The first thing many companies do when they start their cloud initiative is figure out how to lock it down. Too often, the people who own security and governance spend months (sometimes years) trying to figure out how to apply controls necessary to meet their security and regulatory requirements. Meanwhile, developers are not allowed to use the platform or, worse yet, they whip out their credit card and build unsecured and ungoverned solutions in shadow clouds. We need to shift our thinking from “how can we prevent developers from screwing up” to “how can we empower developers” by providing security and governance services that are inherited as cloud resources are consumed. To do this, we need to get out of our silos and…

Micro-Services vs. Business Service Granularity

In the many discussions and architecture approaches I’m reading about Micro-Services, a key point seems to elude the conversation. Namely, “What’s a Micro-Service?” Not what are the technical properties of a micro-service, but rather what level of BUSINESS functionality should be encapsulated in a single service, a single “micro” service? One “formal” definition that’s floating around is this…

...an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies. - James Lewis and Martin Fowler via http://techbeacon.com/

From the SOA (service oriented architecture)…
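
As a strawman for the granularity question, here is a minimal Python sketch of a “micro” service exposing exactly one business capability (invoice lookup, a hypothetical example) as an HTTP resource API. The open question the post raises is whether “invoices” is the right business granularity at all:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical in-memory data store standing in for the service's own database.
INVOICES = {"1001": {"id": "1001", "customer": "ACME", "total": 42.50}}

class InvoiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Resource-style route: GET /invoices/<id>
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "invoices" and parts[1] in INVOICES:
            body = json.dumps(INVOICES[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Runs one independently deployable process, per the Lewis/Fowler definition.
    HTTPServer(("localhost", 8080), InvoiceHandler).serve_forever()
```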

Big Ball of Mud Software

In the space of Software Architecture, the “Big Ball of Mud” represents “natural growth” – a system that just adds and changes without ANY planned architecture. (More on the Big Ball of Mud here.) While we hear about it, and sometimes run into it as we have to solve project problems, how do you spot a software product in that mode? Side note… while traditionally a Big Ball of Mud describes gradual changes to a single system or program, we also see a Big Ball of Mud in enterprise architecture: the unplanned natural growth of various systems and technologies, and of the interfaces and interconnections between them. While dealing with spaghetti code is tough, dealing with spaghetti connections and systems is extremely expensive and risky – but is all too frequent. Here’s a software product conversation I had this week…

Please wait for a site operator to respond.
You are now chatting with 'Randy'.
Your Issue ID for this chat is LTK1219208815693X…

Bad Integration by Design or How to Make a Horrible Web Service

To understand what makes easy integration or a “good web service”, it’s worth taking a glance at the historical methods of I.T. systems integration. After all, business systems have been passing data around and/or activating each other, aka integrating, for almost as long as there have been commercial I.T. business systems (approximately since 1960). The first major “interface” method between systems was throwing sequential fixed-length record files at each other. This was pretty much the only method for 20 years and still remains in widespread use, though mostly around mainframe and legacy systems. The system providing the interface, either outputting the data or providing a format in which to send it data, defines a field-by-field interface record, along with header and footer records. Because these are fixed-length records, the descriptive definition (the human-readable documentation) must include the format and length of each field, along with any specialized logic to interpret…
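
To make the fixed-length idea concrete, here is a tiny illustrative layout and parser in Python. The field names, offsets, and widths are invented for the example; a real interface document would spell out each one, plus the “specialized logic” (here, an implied two decimal places in the amount):

```python
# One fixed-length record, built up field by field (widths in comments).
RECORD = (
    "0001"                  # record_id, 4 chars
    "JOHN SMITH          "  # customer, space-padded to 20 chars
    "0000129950"            # amount in cents, zero-padded to 10 chars
    "USD"                   # currency, 3 chars
    "20240115"              # date YYYYMMDD, 8 chars
)

LAYOUT = [                  # (field name, start offset, length)
    ("record_id",     0,  4),
    ("customer",      4, 20),
    ("amount_cents", 24, 10),
    ("currency",     34,  3),
    ("date",         37,  8),
]

# Parsing is pure positional slicing; there are no delimiters or field names
# in the data itself, only in the human-readable documentation.
record = {name: RECORD[start:start + length].strip()
          for name, start, length in LAYOUT}

# The "specialized logic": amount carries an implied two decimal places.
record["amount"] = int(record.pop("amount_cents")) / 100
print(record)  # {'record_id': '0001', 'customer': 'JOHN SMITH', ..., 'amount': 1299.5}
```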