During the last two years smartphones have developed at an incredible pace. The future seems to point to even better performance with more muscle underneath the covers. Many point out that the smartphone today is in fact a small computer.
I remember back in my college days that one of the key things taught to me was that computer design is really about an interplay of three resources: the CPU, the memory, and the I/O bandwidth. Depending on which of these is constrained, the design and architecture of computers can change. For example, the classic CISC vs. RISC debate was in many ways triggered by the fact that the CISC model was designed for an age when memory was expensive compared to CPU processing power, so minimizing memory use was important. Hence the complex instruction sets. RISC came into vogue when it was realized that, with cheaper memory, it made sense to spend more memory on simpler instructions so that the CPUs could run faster. As network technology has improved, we have swung from dumb terminals to smart standalone PCs networked together, and now we seem to be swinging back to semi-dumb terminals as cloud computing becomes more popular.
In the end, it is always critical that the three legs balance one another. Any one leg running too far ahead by itself is not useful; the other legs have to be adjusted to match.
These days, I’m thinking there was an important fourth leg that was missed (either by my professors or maybe because I nodded off 🙂 ), and that is the user interface. The move to graphical user interfaces in the 90s showed this. If we had stuck with DOS-like interfaces, I’m not sure we would have needed much of the CPU, memory, and network I/O bandwidth increases that came in the 1990s. But thanks to Microsoft Windows and the X Window System, demand grew in leaps and bounds. (Of course some might argue this was not for the better.)
So it is not a three legged stool but now a four legged chair.
More recently, I think the mobile phone has brought the user interface to the forefront of this discussion again. Before the current generation of smartphones, even though the technology was there, there were no paradigm-shifting smartphones. Yes, the BlackBerry, the Nokia N series, and others did exist, but they did not grab the public mindshare until the iPhone came onto the scene. Apple’s contribution in many ways was to show how critical user interface design was – in some ways I would say they were the ones to bring about a quantum leap in user interface design for the masses. (Note that Apple’s iPhone never had the most powerful CPU nor the most memory, and it didn’t even have 3G in the first generation.)
The interesting thing today is that all four legs seem to be improving at a rapid rate in the mobile phone. CPUs are moving to 1 GHz and multi-core architectures, memory keeps getting cheaper, and bandwidth is increasing with 3G, 4G, and improvements to Wi-Fi and other wireless technologies such as Bluetooth. But of these four legs, the one we are not able to improve as fast as the others is the fourth one.
We are stuck with a screen that cannot be too large, otherwise we could not carry the phone around. This in turn hampers how we can interact with the phone. Not only is input awkward, but in many cases our ability to consume data on a small screen is limited compared with, for example, the much larger desktop or TV screen. The keypad is always going to be somewhat uncomfortable, though one can become used to it. The touch interface has been a huge improvement, but how far can it go given our pudgy fingers? Voice interfaces have improved greatly; maybe they can bring a further increase? Or will we need to rely on more futuristic user interface devices such as projectors, or “smart” glasses with heads-up displays?
Another direction things can go in is shown by Google Goggles: the user interface can potentially be made more intelligent by offloading more of the processing to the cloud. In many cases that may be the best approach, because for some things, such as image recognition, you need access to huge databases and supercomputer-like processing power. Note that this sort of approach relies not only on increased CPU power in the phone but also on greatly improved network I/O bandwidth.
What do you think?