
Cloud Native is already Yesterday

Cloud hosting, decoupling (server) virtualization from the supporting hardware and moving it to a Public Cloud (letting the physical tier and network tier be someone else’s problem), is old news.  Plenty of IT shops are still struggling with what to move, when or if to shut down their data centers, and the issues of data gravity… regardless, Cloud hosting has passed mainstream adoption and moved far into the Late Majority.



The surprise is that Cloud Native, replacing ‘virtual machines’ and self-installed application server software, database server software, and the like with “As A Service” Public Cloud offerings, is also far along.  And regardless of where it sits in the adoption cycle, it is far along in technology maturity. Far enough that the Public Cloud vendors have already built the next generation upon it.





The next generation and next cycle, Cloud as the Application Service Layer, is already here, already market viable, and already becoming the base for the next generation of applications.  And by leveraging Cloud Native and Cloud Application Services, the pace of change has significantly accelerated.

Technology and software changing at pace is nothing new.  Every mid-size to large IT enterprise struggles with a tech portfolio and its associated life cycle.  Almost no organization can afford to refresh its full tech portfolio, and so ends up managing generations of software and technology.  The shift to Cloud is not only a technology shift, it is also accelerating the change cycle itself.

The shift to Cloud started out straightforwardly… once virtualization became the data center tech stack, the deployment of virtual machines and their resources (memory, disk, CPU) was already disassociated from the physical platforms.  There were (and still are in many a private data center) system and resource admins to manage and plan the resource allocations and supporting equipment purchases, and deployments, and configurations, and patching, and maintenance, and and and…

The move to the Public Cloud shifted that management from those system and resource admins to the Public Cloud provider.  The management and maintenance of those resources, the planning, ordering, installation, patching, and and and… became someone else’s problem.  The IT organization could focus on control and cost, and no longer had to be concerned with the management and operation of the underlying equipment, network, infrastructure, etc.

Which is awesome. Jump to the Public Cloud, effectively outsource control of the lowest level of IT infrastructure – and retire the majority of the physical data center… let Amazon or Microsoft or Google build and maintain it.  For IT enterprises, that’s the initial migration to the Cloud.  (This was initially referred to as IaaS, Infrastructure as a Service, when vendors and architects were trying to categorize cloud capabilities.)

And then came Cloud Native.  

The Public Cloud vendors began providing database services, queue services, storage services, and more. If an IT organization needs a database, it can have a system admin spin up a virtual machine, a database admin install the database software, set up backups, cluster it, set up monitoring, and it’s ready to be used… then bring in a DBA to tune, optimize, periodically patch, and do the yearly version upgrade. Cost, cost, time, cost: all overhead (no direct business value).

Or use a Cloud “database as a service”: pick the type of database needed and have a Cloud database at the ready.  The Cloud provider spins up the capacity and the database instance, offers a selection of backup and clustering options, and keeps it patched and version upgraded.
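As a rough illustration, here is a minimal sketch (Python with the boto3 SDK, using hypothetical names and values) of requesting a managed database from AWS RDS.  One API call, and the provider takes on the capacity, backups, patching, and standby options chosen here:

```python
import boto3

# Hypothetical example: request a managed PostgreSQL instance from RDS.
rds = boto3.client("rds", region_name="us-east-1")

response = rds.create_db_instance(
    DBInstanceIdentifier="orders-db",            # placeholder instance name
    DBInstanceClass="db.t3.micro",
    Engine="postgres",
    MasterUsername="admin_user",
    MasterUserPassword="change-me-immediately",  # in practice, use a secrets manager
    AllocatedStorage=20,                         # GiB
    MultiAZ=True,                                # managed standby / failover
    BackupRetentionPeriod=7,                     # automated backups, in days
    AutoMinorVersionUpgrade=True,                # provider keeps it patched
)
print(response["DBInstance"]["DBInstanceStatus"])
```

No system admin, no install media, no backup scripts; the trade-off is that the knobs you get are the ones the service exposes.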

Take the underlying infrastructure components, use Cloud “as a service” offerings, and be Cloud Native. (Is this advanced IaaS, or has it moved up to Platform as a Service? The formal industry answer is “yes”.)

But now the Cloud Providers have moved both up and down the stack, offering more Cloud Native capabilities and offering Cloud as the Application Service Layer built on their own Cloud Native stack.

Down the stack:

Build, install, and manage a container manager (such as Docker and/or Kubernetes) yourself, or use one of the various “Containers and Kubernetes as a Service” options (for example AWS Fargate, which auto-manages most aspects of the container environment at an additional cost, but significantly reduces the need for expertise and operations staff). This effectively lets a team use “containers” as if they were serverless (while still allowing long-running processes).
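A minimal sketch of that model, again in Python with boto3 and placeholder cluster, task definition, and subnet identifiers: launching a container on AWS Fargate is a single API call, with no hosts or Kubernetes nodes for the team to provision or patch.

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Run an already-registered task definition on Fargate; the container hosts
# are managed entirely by the provider.
response = ecs.run_task(
    cluster="demo-cluster",                  # placeholder cluster name
    launchType="FARGATE",
    taskDefinition="web-worker:1",           # placeholder task definition
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0abc1234"],  # placeholder subnet ID
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["lastStatus"])
```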

Serverless takes this even further, allowing teams to deploy and run code without any infrastructure setup, virtual or otherwise.  (But it has limits on running time, leaving a place for both containers and serverless depending on the business need and the programming approach used to meet it.)
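For comparison, the serverless unit of deployment can be nothing more than a function.  The sketch below is a self-contained AWS Lambda handler in Python; the event fields are illustrative, and the trigger (API Gateway request, queue message, file upload) is configured outside the code.

```python
# handler.py - a complete AWS Lambda function; the team deploying it
# provisions no servers, VMs, or containers.
import json

def lambda_handler(event, context):
    # 'event' carries the trigger payload; 'name' is an illustrative field.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```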

Up the stack:  

Code and security-validate an application’s own user security structure (with a supporting LDAP database), or use “User Management as a Service” (such as AWS Cognito).  Instead of spending weeks to months building user management (which would also be wise to have security tested), add a fully capable, security-certified service in days to weeks.
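To show the shape of “User Management as a Service”, here is a sketch in Python with boto3 against AWS Cognito.  The app client ID, user, and password are placeholders, and the USER_PASSWORD_AUTH flow assumes it has been enabled on that app client.

```python
import boto3

cognito = boto3.client("cognito-idp", region_name="us-east-1")

# Register a new user against an existing Cognito user pool app client.
cognito.sign_up(
    ClientId="example-app-client-id",        # placeholder app client ID
    Username="jane.doe@example.com",
    Password="CorrectHorseBatteryStaple1!",
    UserAttributes=[{"Name": "email", "Value": "jane.doe@example.com"}],
)

# Later, authenticate that user and receive JWT tokens for the application.
auth = cognito.initiate_auth(
    ClientId="example-app-client-id",
    AuthFlow="USER_PASSWORD_AUTH",           # must be enabled on the app client
    AuthParameters={
        "USERNAME": "jane.doe@example.com",
        "PASSWORD": "CorrectHorseBatteryStaple1!",
    },
)
print(auth["AuthenticationResult"]["IdToken"][:20], "...")
```

Password policies, MFA, token signing, and the security certifications behind them all come with the service rather than being built and tested in-house.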

On top of the stack:

Take the services being offered, such as AI services (built using cloud native database services, which are in turn built using basic cloud virtualization services), build up the model, and offer business capabilities…as a service.  Examples include AWS Comprehend (natural language processing that extracts entities and meaning from text), AWS Comprehend Medical (the same analysis focused specifically on medical documents and clinical notes), and AWS Lex (conversational voice and text interfaces to voice-enable an application).
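A small sketch of consuming such a business-level AI service, again in Python with boto3; the sample text is made up, and AWS Comprehend Medical exposes a similar call (detect_entities_v2) for clinical text.

```python
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

text = "The patient was prescribed 20mg of Lipitor after the Boston visit."

# Entity and sentiment detection come back as a service response; no model
# to build, train, or host.
entities = comprehend.detect_entities(Text=text, LanguageCode="en")
sentiment = comprehend.detect_sentiment(Text=text, LanguageCode="en")

for entity in entities["Entities"]:
    print(entity["Type"], entity["Text"], round(entity["Score"], 2))
print("Overall sentiment:", sentiment["Sentiment"])
```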

Cloud as an Application Service Layer means moving from building with Lego bricks to building with Lego kits.


What’s the impact on the IT enterprise?

-      Much faster application development, with supporting capabilities being used (as a service) rather than developed.  This is a boon to the business, but will pose challenges for IT to adjust processes to accommodate the pace (CI/CD, continuous and automated QA, and user training).

-      Decrease in operations costs (though a significant portion of that will transfer to the cloud vendors’ service bills) through eliminating equipment, data centers, and lower-level operations personnel.  This may mean repurposing staff, retiring vendor relationships, and figuring out how to manage software licensing in a dynamic environment (with each software vendor having different approaches).

-      A need to build up management capabilities around the services being used.  While cloud resources are easy to activate, they are tricky to track and deactivate.  Bills can grow and unused resources go unreleased if usage is not tracked and managed.

-      Change, change, and more change.  While IT has been trying to manage operations and security on the existing tech portfolio, the cloud vendors have pretty much had an open field with no legacy portfolio to support.  They’ve applied their full capabilities to creating new features, and are bootstrapping on their own platforms, showing how fast tech can move when built on Cloud Native.  AWS Comprehend Medical is an example of veering directly into business features.

The use of the Public Cloud is only going to accelerate, and the providers’ Application Service Layer capabilities, plus AI and Machine Learning services, are quickly pushing software into new territory.  Is your organization Cloud Native+ ready?

