Those of us who care about the user experience of our products really sweat the details. Is that text box big enough? Is it the right font? Is the contrast strong enough? Is the layout self-explanatory? The list goes on and on.
But when was the last time you gave much thought to how your users would actually, physically, enter data? I’m just the same. Having learnt to touch type 30 years ago, I do so without thinking, right up until I meet a touch-screen keyboard; then I struggle like a newbie trying to text.
Recently I met one of the guys behind SwiftKey, which promises to be a solution to all that touch-screen fumbling, at least on Android phones. SwiftKey not only corrects your typing but also figures out what you are likely to type next and offers it to you. Try it!
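SwiftKey’s actual engine is proprietary, but the core idea of next-word prediction can be sketched with a simple bigram frequency model: count which word tends to follow which, then suggest the most frequent followers. Everything below (function names, toy corpus) is my own illustration, not SwiftKey’s code:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus: str) -> dict:
    """Count which word follows which in the training text."""
    following = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        following[current][nxt] += 1
    return following

def suggest(following: dict, word: str, k: int = 3) -> list:
    """Return the k most likely next words after `word`."""
    return [w for w, _ in following[word.lower()].most_common(k)]

corpus = "the quick brown fox jumps over the lazy dog and the quick cat"
model = train_bigrams(corpus)
print(suggest(model, "the"))  # -> ['quick', 'lazy']
```

A real predictive keyboard adds a personal dictionary, longer context, and error-tolerant matching on top, but the prediction step is recognisably this.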
Just in case you think the ability to enter text accurately isn’t important, take a look at the Damn You, Autocorrect website: deep public embarrassment.
The history of text input has been a very patchy one. The layout of the original QWERTY keyboard by Christopher Latham Sholes in 1873 had more to do with the limits of a mechanical typewriter than with ergonomics. The placing of the final “R” was a marketing move by Remington, so that their salesmen could demonstrate the typewriter by pecking out the words “TYPE WRITER” from a single row. I guess that says something about the salesmen’s limitations too.
Apologies to German (QWERTZ) and French (AZERTY) readers at this point.
The keyboard is itself very limiting and many people have attempted to find better solutions.
Louis Braille invented the Braille writing system in the 1820s, and in 1892 Frank Hall created a chorded keyboard device to punch the dot patterns that define its characters. The idea of pressing more than one key at a time was taken up by Douglas Engelbart in 1968, and a later development of it (the Microwriter) was something I tried to learn to use on my early BBC Computer. It is such an appealing idea: an input device you could hold in one hand and use without looking at it, even whilst walking and talking. Almost natural!
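The essence of a chorded keyboard is that a *combination* of simultaneously pressed keys, not a single key, selects each character. A minimal sketch of that lookup, with chords invented purely for illustration (they are not the real Microwriter or Hall assignments):

```python
# A chord is a set of keys pressed together; frozenset makes the
# combination hashable and order-independent, so {thumb, index}
# and {index, thumb} are the same chord.
CHORDS = {
    frozenset(["thumb"]): "e",            # invented mapping
    frozenset(["index"]): "t",            # invented mapping
    frozenset(["thumb", "index"]): "a",   # invented mapping
    frozenset(["index", "middle"]): "n",  # invented mapping
}

def decode(pressed: set) -> str:
    """Translate one chord (the set of currently pressed keys)
    into a character; '?' for an unknown chord."""
    return CHORDS.get(frozenset(pressed), "?")

print(decode({"index", "thumb"}))  # -> a  (press order doesn't matter)
```

With five keys you get 31 possible chords, which is why a one-handed device can cover a whole alphabet.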
Why bother with keys at all? Why not use handwriting recognition? Well, to be frank, my writing is pretty illegible even to me. I’ve tried handwriting recognition over the years, and by the time I’ve taught the device to decipher my scrawl, and slowed down to a pace it can handle, I might as well have set the words in cold type.
Researchers at the Human-Computer Interaction Institute in Carnegie Mellon University’s School of Computer Science have come up with an interesting development of handwriting recognition for some people with muscular dystrophy. EdgeWrite uses an adapted alphabet whose letters you form by moving a stylus around the edges and corners of a small square. Because the square’s physical edges guide the stylus, the strokes are much easier to make with limited hand control, and the same alphabet even works with a joystick.
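What makes EdgeWrite so robust is that a character is recognised purely from the *order* in which the stylus visits the square’s corners, not from the exact shape of the stroke. A toy sketch of that lookup in Python (the corner sequences below are my own illustration, not the published EdgeWrite alphabet):

```python
# Corners of the square, numbered clockwise from top-left:
#   0 --- 1
#   |     |
#   3 --- 2
# A character is identified by the sequence of corners visited,
# so a wobbly stroke still decodes correctly as long as it hits
# the corners in the right order. Sequences here are illustrative,
# NOT the real EdgeWrite alphabet.
STROKES = {
    (0, 3, 2): "l",     # down the left edge, then along the bottom
    (1, 0, 3, 2): "c",  # invented
    (0, 1, 3, 2): "z",  # invented
}

def recognise(corners) -> str:
    """Look up the visited-corner sequence; '?' if unrecognised."""
    return STROKES.get(tuple(corners), "?")

print(recognise([0, 3, 2]))  # -> l
```

Collapsing a messy stroke down to a short corner sequence is what lets the technique tolerate tremor: the input space is tiny and every gesture in it is unambiguous.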
Other inventions in the area let you “swipe” across the touch keyboard (Swype), or use spiralling motions (8pen). Microsoft has patented a gesture-based way of using a phone’s touch keyboard.
But my personal favourite has to be Dasher, from the Inference Group at the Cavendish Laboratory, Cambridge. With Dasher you steer your way through a constant flow of possible letters and words. It feels like trying to write by (failing to) control a 1970s console game. Try it for yourself.
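Under the hood, Dasher gives each possible next letter an amount of screen space proportional to its probability under a language model, so the likely continuations are the easiest to steer into. A toy Python sketch of that interval-splitting geometry (the probabilities are invented; a real model conditions on everything typed so far):

```python
def partition(probs: dict) -> dict:
    """Split the unit interval into one slice per letter, sized in
    proportion to that letter's probability -- the geometry behind
    Dasher's ever-zooming display."""
    intervals, low = {}, 0.0
    for letter, p in sorted(probs.items()):
        intervals[letter] = (low, low + p)
        low += p
    return intervals

# Invented probabilities for a three-letter alphabet:
probs = {"a": 0.5, "b": 0.3, "c": 0.2}
print(partition(probs))  # 'a' gets half the space, 'c' a fifth
```

Steering into a slice commits that letter, and the slice then subdivides again for the next one; frequent words end up as big, easy targets while rare ones are still reachable, just smaller.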
But what has any of this got to do with your software product or website? All of this is a firm poke in the ribs to remind us that entering text can be a pain and that we should think very carefully about how and when we ask users to do so. It is even more of a pain when we expect users to navigate between text input fields. Have you watched anyone typing, mousing, typing? Eyes off the screen to find the mouse, eyes back to navigate, eyes off again to get your hands back onto the home keys, and so on.
We probably can’t give our users a better keyboard, but we can help them out by allowing them to navigate with the keyboard and enter information with the mouse.