Forrester Views Cloud/Web as Outmoded and App-Internet as the New Model

LeWeb 2011: George Colony, Forrester Research, “Three Social Thunderstorms”

Over the past several years the word “cloud” has been used, and to some extent abused, almost to the point of being meaningless. Every technology company, provider and enterprise is immersed in some sort of “cloud” project, even if the exact descriptions of these projects fall short of the NIST formal definitions. I think as technologists we tend to rebel against the status quo in an attempt not just to redefine the marketplace but also to claim for our own a new path as we iterate over the current challenges of delivering new applications and services.

Just as we have overused and bludgeoned the hell out of terms like internet, virtualization and web (cloud's prior name), we are bound to move into a new set of vernacular definitions such as intercloud, interweb, fog computing, or, in the case of Forrester CEO George Colony, App-Internet.

“Web and cloud are … outmoded,” concludes Mr. Colony, who goes on to explain App-Internet as the next model, offering a “faster, simpler, more immersive and a better experience”.

The thesis for this conclusion is based on the figure above, where the y-axis is defined as “utilities per dollar” and the x-axis is time. P represents “Moore's Law” and speaks to the scalability of processing power. In reality, the beauty behind Moore's Law is lost in translation. What Moore actually said was that “transistors on a chip would double every year”; subsequently David House, an Intel executive at the time, noted that the changes would cause computer performance to double every 18 months [1].
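To see how far those two doubling periods diverge, here is a quick sketch; the starting values and ten-year horizon are arbitrary illustrative assumptions, not measured data:

```typescript
// Contrast Moore's observation (transistor count doubling every year)
// with House's 18-month performance doubling.
function doublings(start: number, years: number, periodYears: number): number {
  return start * Math.pow(2, years / periodYears);
}

const years = 10;
const transistors = doublings(1, years, 1.0); // Moore: ~2x per year
const performance = doublings(1, years, 1.5); // House: ~2x per 18 months

console.log(`After ${years} years: ${transistors}x transistors, ~${performance.toFixed(0)}x performance`);
// After 10 years: 1024x transistors, ~102x performance
```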

If you plot transistors per chip against actual computer performance, you might see a different picture, due to the thermodynamic properties and manufacturing complexity of CMOS-based technology, not to mention the difficulty of actually utilizing that hardware with today's languages, application methodologies, libraries and compilers.

S stands for the growth in storage, which Colony calls “Hitachi's Law”: the prediction that storage will double approximately every 12 months. This too is somewhat contrived, as scaling magnetic media on disk is becoming extremely difficult while we approach the limits of perpendicular recording, although there may be some promise in the discovery of adding NaCl to the recording process[2]. Yes, we can build bigger buildings with disks packed to the ceiling, but the logistics of managing such a facility are hitting their upper limits. (Imagine shuffling through a facility of over 100,000 sq ft replacing all of those failed hard drives.)
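A back-of-envelope sketch of that replacement-logistics problem; the fleet size and annualized failure rate below are illustrative assumptions, not vendor figures:

```typescript
// Expected drive replacements in a warehouse-scale facility.
const drives = 200_000; // drives in the facility (assumption)
const afr = 0.04;       // 4% annualized failure rate (assumption)

const failuresPerYear = drives * afr;
const failuresPerDay = failuresPerYear / 365;

console.log(`~${failuresPerYear} failed drives/year, ~${failuresPerDay.toFixed(0)} per day`);
// ~8000 failed drives/year, ~22 per day -- every day, forever
```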

N relates to the network, where Colony goes on to describe the adoption rates of 3G vs. 4G. First and foremost, nailing down exactly what 4G is and means is an exercise in itself, as most vendors are implementing various technologies under this umbrella[3]. With an estimated 655 million people adopting 4G in its various forms by 2010[4], and the quick adoption of new mobile devices, I think this is a bit short-sighted.

But there is another aspect missing here: all of those towers that collect 3G and 4G signals need to be back-hauled into the Internet backbone. With 40GE/100GE now ratified by the IEEE, I suspect the first wave of 100GE deployments will be put into production in 2012 [5].
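A rough sketch of why that backhaul matters; the tower count and per-site throughput are illustrative assumptions only:

```typescript
// Aggregate tower traffic vs. 100GE backbone links.
const towers = 10_000;       // cell sites in a region (assumption)
const avgPerTowerMbps = 100; // average backhaul per 4G site (assumption)

const aggregateGbps = (towers * avgPerTowerMbps) / 1000;
const linksNeeded = Math.ceil(aggregateGbps / 100); // 100GE links

console.log(`${aggregateGbps} Gbps aggregate -> ~${linksNeeded} x 100GE links`);
// 1000 Gbps aggregate -> ~10 x 100GE links, just for one region's towers
```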

Colony goes on to say, “If your architecture was based on network you are wasting all of these improvements in processing and storage … the center [meaning the warehouse-scale datacenters such as those of Google, Amazon and Microsoft] is becoming more powerful and the periphery is becoming ever more powerful.”

His point is valid to an extent, but not because of the P, S, N curves; rather, now that devices are so powerful AND we have such a robust network infrastructure, we can take advantage of all of that processing power and storage available to us. After all, if transport pricing had continued to rise as the late, great Jim Gray predicted in his paper on Distributed Computing Economics [7], we would not even be having this discussion, because without the ability to distribute data across the network, all we would have is a set of very smart, expensive devices that amount to fancy calculators.

To that point, Colony compares today's devices with their predecessors, but as stated earlier it's not a fair comparison: “In 1993 the iPad 2 would have been considered one of the 30 fastest computers in the world.” Unfortunately, the problem space has changed since 1993, and if we follow Jevons' Paradox, the proposition that technological progress that increases the efficiency with which a resource is used tends to increase (rather than decrease) the rate of consumption of that resource[6], it would be hard to compare the two accurately.

So the reality is that all of these iterations, from the early ARPANET viewpoint of access to expensive time-sharing computer centers to the highly distributed and interconnected services we have today, are just a succession of changes necessary to keep up with the demand for more information. Who knows what interesting changes will happen in the future, but time and time again we have seen amazing strides taken to build communities and share our lives through technology.

So let's take a closer look at the App-Internet model.

Hmm. So how is this different from today's “web-centric” application architecture? After all, aren't web browsers like Chrome and Safari “applications”?

Jim Gray defined the ideal mobile task as one that is stateless (no database or data access), has tiny network input and output, and has a huge computational demand[7]. To be clear, his assumption was that transport pricing would keep rising and make the economics infeasible; as we know, the opposite happened, and transport pricing has fallen[8].

“Most web and data processing applications are network or state intensive and are not economically viable as mobile applications.” Again, the assumptions he had about telecom pricing made this prediction incorrect. He also contended that “data loading and data scanning are cpu-intensive; but they are also data intensive and therefore are not economically viable as mobile applications.” The root of his conjecture was that “the break-even point is 10,000 instructions per byte of network traffic or about a minute of computation per MB of network traffic”.
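It is worth working through that break-even arithmetic. A minimal sketch, where the effective CPU rate is an assumption back-derived from Gray's own figures rather than a measured number:

```typescript
// Gray's break-even: 10,000 instructions per byte of network traffic.
const instructionsPerByte = 10_000;        // Gray's break-even ratio
const bytesPerMB = 1_000_000;
const cpuInstructionsPerSec = 170_000_000; // effective rate implied by his "minute per MB" (assumption)

const instructionsPerMB = instructionsPerByte * bytesPerMB; // 10^10 instructions
const secondsPerMB = instructionsPerMB / cpuInstructionsPerSec;

console.log(`${secondsPerMB.toFixed(0)} seconds of compute per MB shipped`);
// ~59 seconds -- i.e., roughly a minute of computation per MB, as he said
```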

Clearly the economics and the computing power have changed significantly in only a few short years. No wonder we see such paradigm shifts and restructurings of architectures and philosophies.

The fundamental characteristic supporting a “better experience” is latency: we perceive latency as the responsiveness of an application to our interactions. So is he talking about the ability to process more information on intelligent edge devices? Does he not realize that a good portion of applications written for the web are built with JavaScript, and that advances in virtual machine technology like Google's V8 are what enable all of those highly immersive, fast-responding interactions? Even data loading and data scanning have improved through advances in AJAX programming and the emerging WebSockets protocol, which allows full-duplex communication between the browser and the server in a common serialization format such as JSON.
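As a concrete illustration of that full-duplex pattern, here is a minimal browser-side sketch; the endpoint URL and message shapes are hypothetical, not from any real service:

```typescript
// Full-duplex messaging over WebSockets with JSON serialization.
const socket = new WebSocket("wss://example.com/feed");

socket.onopen = () => {
  // The client can push at any time -- no request/response lockstep.
  socket.send(JSON.stringify({ type: "subscribe", channel: "updates" }));
};

socket.onmessage = (event: MessageEvent) => {
  // The server can push at any time too; both directions stay open.
  const msg = JSON.parse(event.data);
  console.log("server pushed:", msg);
};
```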

There will always be a tradeoff, however, especially as the data we consume is not our own but other people's. For instance, the beloved photo app in Facebook would never be possible with an edge-centric approach, as the data being consumed is actually someone else's. There is no way to store the n² relationship information for all of your friends on an edge device; it must be centralized to an extent.
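A quick sketch of how fast that pairwise data grows; the user counts are illustrative:

```typescript
// Pairwise relationships grow with the square of the user count.
function pairs(n: number): number {
  return (n * (n - 1)) / 2; // unique user-to-user pairs
}

for (const n of [1_000, 1_000_000, 500_000_000]) {
  console.log(`${n} users -> ${pairs(n).toExponential(2)} potential pairs`);
}
// 500M users -> ~1.25e+17 potential pairs: far beyond any edge device
```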

For some applications, like gaming, we have a high sensitivity to latency, as the interactions are very time-dependent, both for the actions necessary to play the game and for how we take input for those actions through visual cues in the game itself. But if we look at examples such as OnLive, which allows lightweight endpoints to be used for highly immersive first-person gaming, clearly there is a huge dependency on the network. This is also the prescriptive approach behind Silk, although Colony discusses it in his App-Internet context. The reality is that the Silk browser is merely a renderer: all of the heavy lifting is done on Amazon's servers and delivered over a lightweight communications protocol called SPDY.

Apple has clearly dominated, pushing much of today's focus onto mobile device development. The App-Internet model is nothing more than the realization that “applications” must be part of the model itself, something the prior “cloud” and “web” terms never clearly articulated.


The Flash wars are over … or are they?


So what is the point of all of this App-Internet anyway? Well, the adoption of HTML5, CSS3, JavaScript and advanced libraries, code generation, etc. has clearly unified web development and propelled the interface close to a native experience. There are, however, some inconsistencies in the model which allow Apple to stay just one step ahead with the look and feel of native applications. The reality is that we have already been in this App-Internet model for some time now, ever since the first XHR (XMLHttpRequest) was embedded in a page with access to a high-performance JavaScript engine like V8.
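For illustration, the XHR pattern in its simplest form; the URL and element id here are hypothetical:

```typescript
// Fetch JSON asynchronously and update the page without a reload --
// the pattern that arguably started the "App-Internet" years ago.
const xhr = new XMLHttpRequest();
xhr.open("GET", "/api/feed.json");
xhr.onload = () => {
  if (xhr.status === 200) {
    const data = JSON.parse(xhr.responseText);
    document.getElementById("feed")!.textContent = data.headline;
  }
};
xhr.send();
```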

So don't be fooled: without the network we would have no ability to distribute work and handle the massive amount of data being created and shared around the world. Locality is important until it's not … at least until someone builds a quantum computer network.

over and out…

  1. http://news.cnet.com/2100-1001-984051.html
  2. http://www.techspot.com/news/45887-researchers-using-salt-to-increase-hard-drive-capacity.html
  3. http://en.wikipedia.org/wiki/4g
  4. http://www.fiercewireless.com/story/real-world-comparing-3g-4g-speeds/2010-05-25
  5. http://www.businesswire.com/news/home/20110923005103/en/Xelerated-Begins-Volume-Production-100G-Network-Processor
  6. http://en.wikipedia.org/wiki/Jevons_paradox
  7. http://research.microsoft.com/apps/pubs/default.aspx?id=70001
  8. http://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php (Note: This is more representative as a trend rather than wholly accurate assessment of pricing)