Anticipatory computing – the buzz is growing louder, but how much do we need it? Despite the ubiquitous mobile device and its plethora of tools for building to-do lists, recording memos, managing our appointments and alerting us to all manner of anniversaries, there still seems to be an emerging market for tools that let us remember even less ourselves.
Thankfully, in addition to that pressing concern, there are more meaningful ways in which predictive computing is being used in mobile devices. Primarily, these tools tend to focus on helping us manage our ever-complex, time-poor lives by personalizing our information streams and thereby uncluttering the process of finding that most relevant of data morsels. In this article I’ll take a quick poke around to see what’s been happening in this space.
“It is one thing to use computers as a tool, quite another to let them do your thinking for you.” – Tom Clancy
One much-touted technology for anticipating our needs is the intelligent thermostat. The Nest product, for example, is a Learning Thermostat. Nice technology and swish design, but a compelling need? I’m not so sure. I don’t know about you, but my antiquated, offline version seems to cope pretty well set at the same temperature all year round. I don’t forget to turn it off [c’mon – there’s just one switch] and even if that were to happen, the ensuing catastrophe would merely be the loss of that surge of air-conditioning needed to re-establish my desired environment upon arrival. I grant you that I might not enjoy a personalized, automated schedule of favourite temperature patterns but, personally, I have found the complexities of the up and down temperature controls fairly easy to master. In any case, it’s a thermostat – what passes for comfortable doesn’t change much; just how sentient does it need to be?
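For the curious, the basic idea behind a schedule-learning thermostat can be caricatured in a few lines of Python: record the setpoints the user picks at each hour of the day, then replay the average. This is purely an illustrative sketch of the concept – Nest’s actual algorithm is proprietary and far more sophisticated, and the class below is my own invention:

```python
from collections import defaultdict


class LearningThermostat:
    """Toy sketch: learn a daily schedule from manual adjustments.

    Records the setpoint chosen at each hour of the day and predicts
    the setpoint for an hour as the average of past choices for that
    hour, falling back to a default when it has seen nothing.
    """

    def __init__(self, default=21.0):
        self.default = default
        self.history = defaultdict(list)  # hour of day -> list of setpoints

    def record_adjustment(self, hour, setpoint):
        # Called whenever the user manually changes the temperature.
        self.history[hour].append(setpoint)

    def predict_setpoint(self, hour):
        samples = self.history.get(hour)
        if not samples:
            return self.default
        return sum(samples) / len(samples)


t = LearningThermostat()
for day in range(5):
    t.record_adjustment(7, 22.0)   # warmer on waking
    t.record_adjustment(23, 18.0)  # cooler overnight
print(t.predict_setpoint(7))   # learned morning preference: 22.0
print(t.predict_setpoint(12))  # nothing learned at noon: default 21.0
```

Even this toy version shows why I remain unmoved: all that machinery exists to reproduce a handful of setpoints I could have programmed into a cheap timer.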
It gets worse. How about a washer/dryer that hooks into your ‘intelligent home’, understands when you are in the building and anticipates your need for chill-out time by switching to ‘quiet mode’? What the? Why not just have it work in quiet mode all the time? Why would you choose to annoy your neighbours or pets by selecting the ‘noisy’ mode? Why does it even have two modes? Again, my antiquated machine only has one mode but, luckily, it is badged as ‘ultra-silent’ – nice touch. Moreover, it comes equipped with a simple timer for those desperate occasions when I really must have it running while I am out of the building.
And then there’s an interesting app on the periphery of anticipatory computing called ‘Humin’. This app augments the traditional address-book functionality with geo-located, context-aware metadata that is both retrieved from the Internet and supplemented by the user. Together with hooks into social-media platforms, this paradigm facilitates a variety of ways to find what you need that transcend the apparently outmoded alphabetical ways of olde. Basically, ‘Humin’ anticipates any tendency you might have to forget names and the last time you met someone, and factors in such things as frequency of contact and current location when handling this tricky task. This type of device intelligence is being hailed in some quarters as a long-overdue UX revolution. There’s no doubt that this is innovative stuff that will certainly have some useful spin-offs. It is, however, adding to the ever-expanding arsenal of tools that steer us down the road of outsourcing our brain entirely and replacing it with a total dependency on our device(s).
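To make the ranking idea concrete, here is a minimal sketch of how an address book might weigh frequency, recency and proximity when surfacing contacts. The `Contact` fields and the weighting are my own assumptions for illustration, not Humin’s actual implementation:

```python
import math
from dataclasses import dataclass


@dataclass
class Contact:
    name: str
    contact_count: int       # how often we've interacted
    days_since_contact: int  # recency of last contact
    distance_km: float       # distance from the user's current location


def relevance(c: Contact) -> float:
    # Hypothetical weighting: frequent, recent and nearby contacts rank
    # highest. Each factor is squashed so no single one dominates.
    frequency = math.log1p(c.contact_count)
    recency = 1.0 / (1.0 + c.days_since_contact)
    proximity = 1.0 / (1.0 + c.distance_km)
    return frequency + recency + proximity


contacts = [
    Contact("Alice", contact_count=50, days_since_contact=2, distance_km=1.0),
    Contact("Bob", contact_count=3, days_since_contact=90, distance_km=500.0),
]
ranked = sorted(contacts, key=relevance, reverse=True)
print([c.name for c in ranked])  # Alice first: frequent, recent and nearby
```

Nothing here is beyond a weekend project, which is rather the point: the clever part of such apps is the data collection, not the arithmetic.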
Of course, there are more interesting and, to my mind, more useful applications for anticipatory computing. Search engines and predictive text systems all attempt to anticipate our next move, and we’ve come some way since they were first available. How about Google Now, for example? Google Now’s handling of appointments, traffic congestion and delayed flights is the type of integrated predictive resource that can really make a difference. Notably, in this case we are not trying to compensate for acute forgetfulness brought on by cognitive overload. Rather, this technology is focused on contextual augmentation, a field that will surely continue to grow as all of us grapple with ways to manage our ever-increasing information overload.
An innovative approach to this was introduced by the MindMeld app that used voice recognition to serve up context-sensitive information pulled from the Internet (in addition to your social media accounts). Importantly, it was based on ‘continuous predictive modeling’ with plans to anticipate what you might need in the next ten seconds based on what it heard in the last ten minutes. The app does not appear to be available any more as the creators, Expect Labs, have refocused their business to provide access to customizable voice interfaces through their MindMeld API instead. With both Samsung and Google among its investors and a successful $13M Series A funding round in December 2014, Expect Labs seem well-positioned to enable significant advancements in this space. Maybe we’ll see someone make good on their original plans for contextual data feeds based on our conversations.
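As a toy illustration of that ‘last ten minutes’ idea, here is a sliding-window sketch: keep only the words heard recently and surface the most frequent non-trivial terms as candidate topics to search on. The class, window size and stop-word list are my own assumptions for illustration, not MindMeld’s actual predictive model:

```python
import time
from collections import Counter, deque


class ContextWindow:
    """Toy sketch of continuous context tracking: retain only words
    heard in the last `window_s` seconds and surface the most frequent
    non-trivial terms as candidate topics for a contextual data feed."""

    STOPWORDS = {"the", "a", "an", "and", "to", "of", "in", "we", "i", "next"}

    def __init__(self, window_s=600):  # ten minutes, per the original pitch
        self.window_s = window_s
        self.events = deque()  # (timestamp, word), oldest first

    def hear(self, utterance, now=None):
        # In a real system the utterance would come from voice recognition.
        now = time.time() if now is None else now
        for word in utterance.lower().split():
            if word not in self.STOPWORDS:
                self.events.append((now, word))

    def topics(self, top_n=3, now=None):
        now = time.time() if now is None else now
        # Drop anything older than the window before counting.
        while self.events and now - self.events[0][0] > self.window_s:
            self.events.popleft()
        counts = Counter(word for _, word in self.events)
        return [word for word, _ in counts.most_common(top_n)]


ctx = ContextWindow()
ctx.hear("flight to tokyo tuesday", now=0)
ctx.hear("hotel near tokyo station", now=30)
print(ctx.topics(now=60))  # 'tokyo' ranks first, heard twice in the window
```

The hard part, of course, is everything this sketch waves away: accurate speech recognition, disambiguation and deciding which of those topics actually warrants fetching anything, which is presumably where the real engineering in the MindMeld API lives.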
Whatever your views on contextual or anticipatory computing it will almost certainly continue to be a prime target for innovation in the years to come. Our always-connected lifestyle has swamped us with information and we struggle to separate the wheat from the chaff. Sadly, it would appear that our cognitive ability has been so overwhelmed that we cannot remember to turn off the air-con or even who we had dinner with last week…
“The difference between technology and slavery is that slaves are fully aware that they are not free.” – Nassim Nicholas Taleb