Back to the Future!

Déjà vu is a wonderful thing.  It struck me the other day whilst I was surfing the web catching up on news.

Back in the 70s, when computing hardware was really expensive, we had thin clients.  The computer you used was little more than an I/O terminal and all processing occurred on the server.  This evolved in the 80s, thanks to the X Window System, into graphical thin clients which look remarkably similar to the interfaces you still see today in most computing environments.

Then computer hardware became cheaper.  People wanted to do more with their computers, and they wanted to do it where it was more convenient for them.  The personal computing revolution occurred.  Computing moved from the office to the home.

Then networking became cheaper.  People wanted to do more with each other, and they wanted to do it in a form that was more friendly to them.  The World Wide Web was born. Computing moved from the desktop to the browser.

Then computers became smaller.  People wanted to stay connected and take their computers with them.  Enter laptops and smartphones.  The era of mobile computing was upon us.  Computing moved, well, everywhere… and it hasn’t stopped moving.

During all this time, people’s expectations of software have continued to increase.  That translates to more code, more machine instructions, more processing and, ultimately, more power consumed.  But while processing speeds have continued to rise exponentially, power availability and density haven’t.

It wasn’t so obvious when we were all plugged into mains power, because you never saw the electricity that your computing activities were consuming.  But once computing became mobile, it became painfully obvious to everyone.  Battery life sucks.

[Image: the inside of a MacBook Air, with the battery taking up roughly half the case]

As the photo above shows, the battery already takes up around half of the volume inside a mobile device.  There is, obviously, a finite limit on how much energy can be stored in such devices… yet our demand for more and more powerful applications seems endless.

One of the reasons Steve Jobs gave for not allowing Flash on iOS was power consumption.  No matter how brilliant the hardware, one fun but poorly written Flash game could drain the battery completely within tens of minutes.  And who would be blamed for the device suddenly powering off?  You got it – Apple.  Not ‘CrappyFlashDeveloper47238’ – Apple.  Who would get slammed in the forums and the press?  Apple.  Who would lose money because of it?  Apple.

Awesome apps use awesome amounts of power.  It’s as simple as that.  So the only way we are going to get more and more awesome apps is to shift the power drain somewhere it doesn’t matter (as much).  Rather than do all the processing on the device itself, we shift the processing somewhere else.

Thus, 30 years later, we’re headed back to the 80s and thin client graphical terminals.  The new terminal is your phone, or your tablet, or your laptop.  You use the device to input your commands, and those commands are sent via the Internet to some server – somewhere – that is running the program you are using.  The server executes your commands and updates a graphics buffer, which is then sent back over the Internet to your device.  Your device then just throws it up on your screen.
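
To make that loop concrete, here is a minimal sketch in Python.  The length-prefixed JSON protocol, the port number and the pretend “frame” payload are all invented for illustration (real remote-display systems such as VNC or Citrix ICA use compression, partial screen updates and much more besides), but the round trip is the same: input goes up, pixels come back.

```python
# A toy thin-client round trip: the client forwards input events to the
# server, the server does the processing and returns a "frame" for the
# client to display.  Protocol and port are invented for illustration.
import json
import socket
import struct
import threading
import time

def send_msg(sock, obj):
    data = json.dumps(obj).encode()
    sock.sendall(struct.pack(">I", len(data)) + data)   # 4-byte length prefix

def recv_msg(sock):
    header = b""
    while len(header) < 4:
        header += sock.recv(4 - len(header))
    (length,) = struct.unpack(">I", header)
    data = b""
    while len(data) < length:
        data += sock.recv(length - len(data))
    return json.loads(data)

def server(port):
    # The "somewhere else": all the heavy lifting happens here, on mains power.
    with socket.create_server(("127.0.0.1", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            while True:
                event = recv_msg(conn)
                if event["type"] == "quit":
                    break
                # Pretend to render the next frame in response to the input.
                send_msg(conn, {"type": "frame",
                                "pixels": f"frame after {event['key']!r}"})

def client(port):
    # The terminal: send input, display whatever comes back.
    with socket.create_connection(("127.0.0.1", port)) as sock:
        for key in "hi":
            send_msg(sock, {"type": "keypress", "key": key})
            frame = recv_msg(sock)
            print("display:", frame["pixels"])   # stand-in for blitting pixels
        send_msg(sock, {"type": "quit"})

if __name__ == "__main__":
    threading.Thread(target=server, args=(5900,), daemon=True).start()
    time.sleep(0.2)   # crude: give the server a moment to start listening
    client(5900)
```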

Since your device is performing minimal processing, you don’t need any more circuitry than is required to handle the I/O, networking and display.  You don’t need a powerful CPU, you don’t need fans, you don’t need banks of RAM, you don’t need a disk drive, you don’t need… a heap of expensive components.  The device could cost half as much, devote roughly double the volume to its battery and, since it might consume perhaps one-fifth the power (the processing having been offloaded to the server), end up with around 10x the battery life.
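
For what it’s worth, here is the back-of-the-envelope arithmetic behind that 10x figure, using the (admittedly rough) multipliers from the paragraph above:

```python
# Battery life scales with capacity and inversely with power draw.
capacity_multiplier = 2.0   # roughly double the volume devoted to battery
draw_multiplier = 0.2       # perhaps one-fifth the power consumed

life_multiplier = capacity_multiplier / draw_multiplier
print(life_multiplier)      # 10.0, i.e. ten times the battery life
```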

Imagine a $150 laptop that lasts for 40 hours not 4, or a tablet that lasts 80 hours not 8.  Having a portable device that you can use all day, every day, and only bother to recharge once a week, now that would be awesome.

This Back to the Future approach also solves two other problems that get argued over and over again on countless forums: “Mac vs PC” and “native apps vs web apps”.

If all of the processing is done server-side then there’s no reason why you can’t connect to different servers for different types of processing.  Connect to a virtual Mac and run Photoshop; connect to a virtual Windows box and run Quicken.  You get the best of both worlds with no dramas.

The native vs web app argument also becomes moot because you can install any native apps you want on the server you are connecting to.  No point trying to argue that your app needs to work offline either, because this approach is a fully online solution and doesn’t pretend to be anything else.

Case study:  I spend only 1.09% of an average week outside of Wi-Fi range, and less than 0.1% of an average year outside of 3G range.  Assuming the mobile device has both forms of connectivity (a fair assumption), it would run at full speed for me 98.91% of the time and still be usable over 3G for almost all of the rest.  I’m not sure anyone can justify paying twice as much for a device that has negligibly better performance and/or availability – I certainly can’t.  Especially when, during those rare offline moments, I’m usually driving and couldn’t/wouldn’t be using it anyway.
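
For anyone who wants to check the coverage arithmetic, here it is, assuming the device simply falls back to 3G whenever Wi-Fi is absent:

```python
wifi_outage = 0.0109   # 1.09% of an average week outside Wi-Fi range
cell_outage = 0.001    # under 0.1% of an average year outside 3G range

print(f"full speed (Wi-Fi): {1 - wifi_outage:.2%} of the time")     # 98.91%
print(f"usable (3G or better): {1 - cell_outage:.2%} of the time")  # 99.90%
```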

If you don’t have decent connectivity when you need it then, sorry, but that’s your problem.  Don’t use it as a reason to criticise ideas on other people’s blogs — fix it.

Interestingly this new form of computing is already with us, at least in nascent form.

Example 1:  Take a Google Chromebook and install the Citrix plug-in.  Instant access to web apps, plus access to native apps via your company or any organisation that offers Desktop-as-a-Service.

Example 2:  Install OnLive Desktop onto an iPad or Android tablet to gain access to office apps.  Sign up at OnLive with a browser using a Mac, PC or TV to play streaming video games over the Net.

Example 3:  Gaikai, Nvidia GRID and … all do cloud computing/gaming.

Heavyweights are pushing this strategy from different directions (Sony recently purchased Gaikai for $380M, for example), which is a sure sign that the concept is solid.  Now all we have to do is wait and see which implementation and pricing model captures the public’s imagination first.
