The new interfaces are winning people over because they are built around usage patterns rather than endless choices. The key thing about the new UIs is that they are contextual – presenting the user with a minimal set of components and then changing in reaction to the user’s gestures. Thanks in no small part to Apple, we have seen a liberating movement towards simple, contextual interfaces. But can these UIs become the norm?
Over on his blog, Alex Iskold has written a wonderful piece on The Rise of the Contextual User Interfaces. In it he contrasts old-school traditional user interfaces – from the days when Microsoft Windows dominated everyone’s interaction with a computer – with the new generation of contextual user interfaces that the likes of Flickr, 37 Signals and Shelfari all seem to have embraced.
I think contextual UIs are something that we all subconsciously appreciate but don’t really think about. They just seem to work; they just seem to let us do what we want to do – and therein lies their beauty. It’s their elegance and their simplicity that makes them a pleasure to use. They only tell us what we need to know, when we need to know it. They don’t confuse us by presenting a plethora of options that we have to decipher before we can continue, nor do they hide important functionality behind an “advanced” setting somewhere. It’s for this reason that I completely agree with Alex when he describes how one of the philosophies of the old UI approach was entrenched in the idea of presenting the user with all of the information, all of the time – which was overwhelming. The move towards Contextual User Interfaces is really about building user interfaces that respond to the way that users interact with them – and, ideally, to the individual user him or herself.
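To make that concrete, here’s a minimal sketch, in TypeScript against the browser DOM, of the kind of contextual control Flickr popularised: an edit affordance that stays hidden until the user’s gesture suggests they want it. The element names and wiring here are my own invention, purely for illustration.

```typescript
// A minimal sketch of a contextual control: the edit affordance only
// appears when the user signals intent by hovering, in the spirit of
// Flickr's in-place title editing. The structure is hypothetical.

function makeEditableOnHover(container: HTMLElement, title: HTMLElement): void {
  const editButton = document.createElement("button");
  editButton.textContent = "edit";
  editButton.hidden = true; // nothing to decipher until the user shows intent

  container.appendChild(editButton);

  // Reveal the control in reaction to the user's gesture, and tidy it
  // away again when their attention moves elsewhere.
  container.addEventListener("mouseenter", () => { editButton.hidden = false; });
  container.addEventListener("mouseleave", () => { editButton.hidden = true; });

  editButton.addEventListener("click", () => {
    // Swap the static text for an input in place, rather than sending
    // the user off to a separate settings page full of options.
    const input = document.createElement("input");
    input.value = title.textContent ?? "";
    title.replaceWith(input);
    input.focus();
  });
}
```

The interesting design choice is what isn’t there: no toolbar, no edit page, no permanently visible buttons. The interface stays minimal until the gesture supplies the context.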
As hardware and processors have become more powerful, we have become better able to pre-process and analyse information for users, in order to give them exactly what they need when they need it.
This transition towards being context aware isn’t something that happened overnight; it’s happened gradually, as technologies have matured and improved to allow us to do things that weren’t necessarily possible before. One often-touted example of this is the spell checker: I remember when you had to explicitly invoke the spell checker in Microsoft Word, whereas now it’s done automatically, in the background, as you type. So I do wonder how long it will be before processing becomes cheap enough for us to process entire databases for users in order to derive better context.
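The pattern behind that shift is simple enough to sketch. Something like the following – where checkWord is a hypothetical stand-in for a real dictionary lookup – quietly re-checks the text a moment after the user pauses, rather than waiting to be invoked:

```typescript
// A rough sketch of the "check as you type" pattern: instead of the user
// explicitly invoking a spell check, the work happens quietly in the
// background shortly after they stop typing. checkWord() is a
// hypothetical stand-in for a real dictionary lookup.

declare function checkWord(word: string): boolean;

function attachBackgroundSpellCheck(editor: HTMLTextAreaElement): void {
  let pending: number | undefined;

  editor.addEventListener("input", () => {
    // Debounce: restart the timer on every keystroke, so the check
    // only runs once the user pauses and the UI stays responsive.
    clearTimeout(pending);
    pending = window.setTimeout(() => {
      const misspelt = editor.value
        .split(/\s+/)
        .filter((word) => word.length > 0 && !checkWord(word));
      // A real editor would underline these in place; logging is
      // enough to show where the contextual feedback would appear.
      console.log("possible misspellings:", misspelt);
    }, 300);
  });
}
```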
But one of the stumbling blocks is that, whilst we can derive or assume some context within an individual application, we still don’t have the tools to computationally describe and communicate context where reasoning and inference is distributed. Why is that important? Well, to me, my context as an individual is in some ways predictable and in others highly temporal. In an ideal world there would be a way to describe who I am and what my interests are in general, but also what my interests are at a given point in time. If we could formalise that description, using a standardised ontology, then we could provide it as an input into any application we used. That’s where a lot of the work that my friend Alan has been doing has been focussed, and it’s also one of my areas of interest.
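Just to illustrate the shape of the idea – and this is not any real standard, every field name below is an assumption for the sake of the example – a formalised context might split into a stable part and a temporal part that any application could accept as input:

```typescript
// A purely illustrative sketch of a portable, machine-readable user
// context: the stable part of who I am alongside the temporal part
// that shifts hour by hour. None of these field names come from a
// real standardised ontology; they are assumptions for this example.

interface UserContext {
  // The predictable part: relatively stable facts and interests.
  identity: {
    name: string;
    homepage?: string;
  };
  longTermInterests: string[];

  // The temporal part: what matters right now, with an expiry so
  // that stale context can be discarded.
  currentFocus: {
    topic: string;
    since: Date;
    expires: Date;
  }[];
}

// Any application could accept such a description as an input and
// adapt its interface accordingly.
function adaptTo(context: UserContext): void {
  for (const focus of context.currentFocus) {
    if (focus.expires > new Date()) {
      console.log(`surface features related to: ${focus.topic}`);
    }
  }
}
```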
That’s why Alex’s post was so wonderful: it resonates with, and articulates, many of the things that I’ve been thinking about for a while.