International Clouds

Cloud computing is clearly well into the hype cycle. Never being one to miss some good hype, I've been following it closely so I can advise my clients on whether and when to take this rising option seriously.

One of the big factors in my evaluation is my current location: I'm working outside the U.S. My host country is heavily wired. It offers high-speed broadband (2-10 Mbit) to 100% of the country, 50 Mbit home connectivity in the major cities (100 Mbit next year), and even cell-based mobile internet at 2 or 3 Mbit from competing carriers. Company and office connectivity is typically equal or better.

The in-country data centers, and the backbones between them, are very fast. Ping times to in-country web sites typically run 30ms across the various data centers, backbones, and ISP / hosting sites. All in all, compared to the US, that's seriously high speed.

Yet all of that is in-country. The one area where the local internet infrastructure is weak is connectivity to the US. Typical ping times to the US run 250ms at non-peak times, and 400-600ms during peak times. This is not only a latency problem but also a bandwidth problem. (It's 2-3 times better to Western Europe, but that's still not good.)
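If you want to reproduce these numbers yourself, here's a minimal sketch in Python (the hostnames are placeholders; substitute one in-country site and one US-based site) that approximates round-trip time by timing TCP handshakes rather than ICMP pings:

```python
import socket
import time

# Placeholder endpoints -- swap in a real in-country host and a real
# US-based host to compare local vs. international round-trip times.
HOSTS = [("local-site.example.co", 80), ("us-site.example.com", 80)]

def tcp_rtt_ms(host, port, samples=5):
    """Approximate RTT by timing TCP connection handshakes."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=10):
            pass  # connect, then immediately close
        timings.append((time.perf_counter() - start) * 1000.0)
    return min(timings)  # the minimum filters out transient noise

for host, port in HOSTS:
    print(f"{host}: ~{tcp_rtt_ms(host, port):.0f} ms")
```

Run it once at midday and once during the evening peak, and the difference described above should show up clearly.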

Some of this is because local internet users greatly appreciate American media: music, TV shows, movies, video clips, and video chatting or Skyping with people in the US. These are high-bandwidth activities, and they push capacity utilization on the international links way up during the evening hours when everyone gets home from work. And since the long-haul international pipes are very expensive, the local backbone providers keep utilization high, just below the serious pain point, to justify the cost.

What does all this mean for cloud computing? For local IT shops, cloud resources hosted within the local (national) internet loop may be a very viable option, given how fast that loop is. Further, in theory even Web 2.0 applications could use in-country cloud resources to deliver functionality directly to our users' desktops.

Utilizing US-based resources, however, is a much iffier proposition. Let me not waffle: it's not an option. Though (for example) Amazon S3 may offer a MUCH better cost:utilization ratio than renting a server for disk capacity at a local hosting center, or dropping some EMC storage into my enterprise data center (with the requisite backups and disaster-recovery site duplication), the international performance problems described above rule it out.
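To make the cost:utilization point concrete, here's a back-of-envelope sketch. All the prices and the workload figures are illustrative assumptions, not quotes from Amazon or any hosting provider:

```python
# Assumed illustrative prices -- check current vendor price lists before
# drawing real conclusions.
S3_STORAGE_PER_GB_MONTH = 0.15   # $/GB-month stored (assumption)
S3_TRANSFER_OUT_PER_GB = 0.17    # $/GB transferred out (assumption)
RENTED_SERVER_PER_MONTH = 300.0  # flat rate for a hosted server (assumption)
RENTED_SERVER_DISK_GB = 500      # paid for whether you use it or not

def s3_monthly_cost(stored_gb, transferred_gb):
    """Pay-per-use: you pay only for what you store and move."""
    return (stored_gb * S3_STORAGE_PER_GB_MONTH
            + transferred_gb * S3_TRANSFER_OUT_PER_GB)

stored, transferred = 120, 40    # hypothetical monthly workload
print(f"S3:            ${s3_monthly_cost(stored, transferred):.2f}/month")
print(f"Rented server: ${RENTED_SERVER_PER_MONTH:.2f}/month "
      f"for {RENTED_SERVER_DISK_GB} GB, of which {stored} GB is used")
```

On numbers like these, pay-per-use storage wins easily on cost; the point here is that a 250-600ms round trip can erase that advantage for anything interactive.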

So the question for cloud computing from my current country base is: will cloud service vendors offer international clouds? Since the local market is about the size of a single mid-size US state, will cloud vendors consider it worthwhile? (Perhaps as a test market to get facilities up to speed for larger markets, or as a franchise opportunity for local businesses that wish to offer the services. Or perhaps it leaves an opening for small (by US standards) start-ups to develop services locally and then push them into the bigger US or European markets?)

In any case, two recent topics bear on this question: whether a "private" (inside-the-company) cloud really qualifies as a cloud, and the news that Google is getting into the data-link business.
