
My Internet’s Too Fast



Whether we’re integrating systems or building new SOA-architected applications (with heavy cross-component communication that may be flowing between physical machines or across virtual machine clusters, which themselves ultimately run on separate physical machines), the network backbone makes a significant difference.

A good enterprise data center is running at least a 1gbit backbone, with portions or major interconnects at 10gbit.  Further, those good network engineers are properly segmenting the network, which ensures the heavy traffic patterns get maximum capacity (and aren’t going to be slowed down by users streaming internet radio).

Between applications in the core of the data center, we rarely run into network capacity or network speed as our major performance problem.  (Credit those network engineers.)  This doesn’t mean the issue should be ignored: if some applications are integrated in a high-speed query or transactional pattern, there can definitely be a major performance benefit to making sure their connection moves up from 100mbit to 1gbit, or from 1gbit to 10gbit.  Poorly architected SOA services and application integrations that have many granular transactions and/or heavy crosstalk can see a surprising performance benefit from a network speed upgrade, or even just from creating separate or multiple network channels (spreading the communications between them across multiple physical network ports).
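To see why those chatty integrations are so sensitive to the link, here’s a rough back-of-the-envelope model.  It’s only a sketch: the call count, payload size and round-trip time below are assumptions for illustration, not measurements from any real system.

# Rough model (not a benchmark) of time spent on the wire when an
# integration makes many small request/response calls.  All figures
# below are illustrative assumptions.

def wire_time_seconds(calls, payload_bytes, link_mbit, rtt_ms):
    """Network time for `calls` request/response exchanges."""
    serialization = (payload_bytes * 8) / (link_mbit * 1_000_000)  # per call
    round_trip = rtt_ms / 1000.0                                   # per call
    return calls * (serialization + round_trip)

calls = 10_000    # granular SOA calls in one business transaction (assumed)
payload = 4_000   # bytes of request + response per call (assumed)
rtt = 0.5         # ms round trip inside the data center (assumed)

for mbit in (100, 1_000, 10_000):
    print(f"{mbit:>6} mbit link: {wire_time_seconds(calls, payload, mbit, rtt):6.2f} s")

Even in this crude model the jump from 100mbit to 1gbit shaves seconds off a single chatty business transaction, while past that point the per-call round trips start to dominate, which is why separate channels (and less chatter) matter as much as raw speed.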

Somewhat surprisingly, those easier options and solutions are becoming LESS viable with the move to virtual machine environments (and for mainframe integration), where one or a small number of physical network ports is shared among a large number of virtual machines or LPARs.  In these cases, increased network speed (with associated faster ports) is the only option.

This is a lower-level physical problem that many an integration or SOA team will miss when trying to diagnose integration performance problems.

Interestingly, I was brought to this topic by a recent Internet upgrade at home.  The cost of high speed has become reasonable and its availability has spread across wide areas.  I upgraded mine in the past week and was surprisingly disappointed.

Doing a number of checks with SpeedTest.net, I found I could get from 50% - 90% of my purchased speed to test points in my nearby area.  Outside my area the numbers are all over the place…

New York – 70%
Paris – 10%
London – 40%
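For anyone who wants to repeat the exercise without the SpeedTest.net page, here’s a minimal sketch that times a single download and reports it as a percentage of the purchased speed.  The test file URL and the 50mbit plan are assumptions; point it at a large file hosted near whichever location you want to test.

# Minimal throughput check (a rough sketch, not a replacement for SpeedTest.net):
# download a test file, then report achieved speed vs. the purchased plan.
import time
import urllib.request

TEST_FILE_URL = "http://example.com/100MB.bin"   # hypothetical test file
PURCHASED_MBIT = 50                              # assumed plan speed

start = time.time()
data = urllib.request.urlopen(TEST_FILE_URL).read()
elapsed = time.time() - start

mbit_per_s = (len(data) * 8) / (elapsed * 1_000_000)
print(f"{mbit_per_s:.1f} mbit/s ({100 * mbit_per_s / PURCHASED_MBIT:.0f}% of purchased speed)")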

What’s going on?  The first is a basic problem… how’s my router performance?  When we were working with 2mbit and 5mbit Internet, putting a slow, cheap processor in home routers was sufficient.  Working with 10mbit, 30mbit or 50mbit Internet needs a LOT more router CPU power.
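A quick packet-rate calculation shows why.  Every packet the router forwards costs CPU (NAT, firewall rules and so on), so packets per second is a rough proxy for router load; the 1,500-byte average packet size below is an assumption, and real traffic with plenty of smaller packets pushes the numbers higher still.

# Rough packet-rate arithmetic: each forwarded packet costs router CPU,
# so packets/second approximates load.  1,500-byte packets are assumed.
PACKET_BYTES = 1_500

for mbit in (2, 5, 10, 30, 50):
    packets_per_second = (mbit * 1_000_000) / (PACKET_BYTES * 8)
    print(f"{mbit:>3} mbit: ~{packets_per_second:,.0f} packets/second to process")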

Second, just like in the data center example above, IF the routers or network paths I’m going through to get to the destination are running at over 50% of their capacity, my performance is going to suffer.  And having some Internet hosting accounts for private and family use, I can certainly note that my hosting providers have NOT upgraded the capacity of those servers.  (They may even have only 100mbit network ports.)
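As a rough illustration of why “over 50% of capacity” is the point to start worrying: under a simple M/M/1 queueing model (an assumption on my part; real Internet paths are messier), queueing delay grows as utilization/(1 - utilization), so it climbs steeply well before a link is actually full.

# Why "over 50% of capacity" matters: in a simple M/M/1 queueing model
# (an assumption; real paths are more complicated), average queueing delay
# is utilization/(1 - utilization) times the base service time.
for utilization in (0.3, 0.5, 0.7, 0.8, 0.9, 0.95):
    delay_factor = utilization / (1 - utilization)
    print(f"{utilization:.0%} utilized: queueing delay ~{delay_factor:.1f}x the base service time")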

What does it mean when the default “low” speed for home users is becoming 10-20mbit, with low-cost options of 20-100mbit Internet, yet the servers were built to serve 100 users at 1mbit each?

It means I’m not going to get 100% of my purchased speed because the sites I visit can’t deliver it.
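The arithmetic behind that mismatch is simple (the server sizing and plan speed below are assumptions for illustration): a server provisioned for 100 users at 1mbit each has, in effect, a 100mbit port, and dividing that port among even a handful of simultaneous visitors leaves each of them far short of a modern purchased speed.

# Illustrative arithmetic only: a server sized for 100 users x 1mbit has
# (at best) a 100mbit port to split among whoever shows up at once.
SERVER_PORT_MBIT = 100      # assumed server/port capacity
PURCHASED_MBIT = 50         # assumed home plan

for concurrent_users in (1, 2, 5, 10, 25):
    share = SERVER_PORT_MBIT / concurrent_users
    pct = min(100, 100 * share / PURCHASED_MBIT)
    print(f"{concurrent_users:>2} visitors: ~{share:5.1f} mbit each "
          f"(at best {pct:.0f}% of a {PURCHASED_MBIT}mbit plan)")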

My Internet is too fast.  :(
