Building Anticipatory Software
Recently, I have devoted many cycles to the question, "What is personalized software?"
In Douglas Hofstadter's Gödel, Escher, Bach: An Eternal Golden Braid, I came across this self-evident but illuminating passage:
"If you punch "1" into an adding machine, and then add 1 to it, and then add 1 again, and again, and again, and continue doing so for hours and hours, the machine will never learn to anicipate you, and do it itself, although any person would pick up the repetitive behavior very quickly. Or, to take a silly example, a car will never pick up the idea, no matter how much or how well it isdriven, that it is supposed to avoid other cars and obstacles on the road; and it will never learn even the most frequently traveled routes of its owner."
Should anticipation (or adaptiveness or intuition) be considered a P0 or P1 requirement for the personalized "machines" we are creating? In other words, can we assume that you will be just as happy with a dumb car whose seats and controls revert from [Driver1_Setting] to [Driver2_Setting] when you approach it as with a smart car whose seats and controls adapt intelligently to your preferences (say, body temperature and back angle) and make incremental adjustments accordingly, over time?
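The smart-car idea above can be sketched in a few lines. This is a minimal, hypothetical illustration (all class and parameter names are my own invention, not any real automotive API): instead of snapping between fixed per-driver presets, the system nudges a stored preference toward each manual correction the driver makes, so the setting drifts toward observed behavior over time.

```python
class AdaptiveSeat:
    """Hypothetical sketch of incremental preference adaptation:
    an exponential moving average over the driver's manual corrections."""

    def __init__(self, initial_angle: float, learning_rate: float = 0.2):
        self.angle = initial_angle          # current stored preference (degrees)
        self.learning_rate = learning_rate  # how quickly new evidence is trusted

    def observe_adjustment(self, chosen_angle: float) -> float:
        # Blend the driver's correction into the stored preference,
        # rather than reverting to a fixed preset between drivers.
        self.angle += self.learning_rate * (chosen_angle - self.angle)
        return self.angle

seat = AdaptiveSeat(initial_angle=100.0)
for correction in (104.0, 104.0, 105.0):
    seat.observe_adjustment(correction)
print(round(seat.angle, 2))  # drifts toward ~104-105, not a hard preset
```

The learning rate is the whole design question in miniature: too low and the car feels "dumb," too high and it chases noise, which is exactly the failure mode the commenters below worry about.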
Comments
Anonymous
January 31, 2005
Once again I'll haul out the trusty example of the much-maligned Microsoft Bob. Whatever people think about the UI of that product, there was some cool stuff under the covers. One such thing was agent technology that could, exactly as you suggest, watch you and suggest a shortcut. One can easily imagine it being extended to not only suggest things, but offer to do them for you. FWIW, this technology is what ended up in Office as Clippy et al. Turns out, haha, people don't really like the computer telling them things as they work.
Anonymous
January 31, 2005
Hi (Note: Newbie to these ideas),
It occurs to me that the problem with predicting such things is that, as in real life, sometimes your predictions will be wrong.
Predictive systems must take this into account and only be used for situations where failure is not a problem.
The next interesting thing is how do you communicate failure to the system in an intuitive manner? You can't keep asking 'Is it OK if...?'
Hmm...interesting...I guess we do simple things like this now, for example remembering preferences the next time someone uses an app...
Best regards
Steve
Anonymous
February 01, 2005
"turns out, haha, people don't really like the computer telling them things as they work"
Aye, there's the rub. No matter how good an idea it seems at first, people react unpredictably. The car is a good example. How cool would it be to have a car that picks the best route and re-routes around traffic? The problem is that people are near obsessively attached to their favorite shortcuts.
UI is the same. Every time that IE changes how it moves the focus around, or automatically assigns focus when the page loads, it disrupts my work patterns. It doesn't matter how good the reason behind the change or how minor. It leaves me grumbling about the annoying product.
Anonymous
February 01, 2005
The comment has been removed
Anonymous
February 01, 2005
Finding a solution that will work for everyone is next to impossible. Most people hated Clippy, but if it had been more like a gentle tap on the shoulder, rather than a nag from the wife, then I would have liked it.
What I really need is an alternate brain. Something that will silently sit and watch my every move. Then when I'm ready, I can say, "Hey, I'm sick of driving home, you take over, but we need to get some milk on the way", or "Remember where we parked!", "Please remind me of our wedding anniversary in time to order some roses".
You could call this a brainPod (tm)(c) (:-})