When it was first introduced in January 2007, the iPhone’s Multi-Touch interface was a real breakthrough in the operation of small portable devices.
Now, if the ideas sketched out in a new Apple patent application, “Multitouch data fusion,” are implemented, we may soon see another qualitative leap in the usability of user interfaces across all sorts of computing devices.
The Multi-Touch (MT) interface is perfectly good for many device control and operation functions. But on-screen 2D object manipulation has some inherent limitations, and there are quite a few actions that can be done better by other input means. Electronic devices that use MT usually carry quite a few of these other input means: cameras, microphones, accelerometers, biometric sensors, temperature sensors, etc.
What Apple is proposing in its patent application is to fuse these secondary input means with Multi-Touch to improve the overall user interface, and it gives quite a few examples of how to do that.
For example, fusing voice input and Multi-Touch can significantly improve an image editor’s capabilities:
Some image manipulation actions, like resize, move, and rotate, are handled very well with Multi-Touch. But other tasks, like changing an object’s color or inserting text, are much easier to accomplish by voice. By fusing these two inputs together, you can make the whole process much faster and easier: you manipulate on-screen objects with your fingers, and then just tell the object to “change color” or “insert text”.
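Just to make the idea concrete, here is a minimal sketch of how such a fusion might work. All the types and command strings below are invented for illustration and are not Apple’s actual implementation:

```swift
import Foundation

// Hypothetical on-screen object, selected by a Multi-Touch gesture.
struct CanvasObject {
    var name: String
    var color: String
}

// Touch picks the object; voice supplies the command.
// Command names are made up for this sketch.
func applyVoiceCommand(_ command: String, to object: inout CanvasObject) {
    switch command {
    case let c where c.hasPrefix("change color "):
        object.color = String(c.dropFirst("change color ".count))
    case "insert text":
        print("Opening text entry for \(object.name)")
    default:
        print("Unrecognized voice command: \(command)")
    }
}

var selected = CanvasObject(name: "photo1", color: "red") // chosen by touch
applyVoiceCommand("change color blue", to: &selected)     // spoken command
print(selected.color) // "blue"
```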
Other MT data fusion examples include:
A combination of MT and motion sensor data can be used in iPhone gaming applications, where motion data is combined with MT to control characters. It can also be applied to improve Multi-Touch sensitivity while you are on the move, filtering out the erroneous gestures that occur due to vibration.
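A naive version of that vibration filter might look something like this; the acceleration scale and thresholds are invented for the example:

```swift
// On a real device, touchDelta would come from the touch controller
// and acceleration from the accelerometer; values here are stand-ins.
func isLikelyIntentional(touchDelta: Double, acceleration: Double,
                         vibrationThreshold: Double = 1.5) -> Bool {
    // While the device is shaking hard, small touch movements are
    // probably noise; large, sweeping ones are still trusted.
    return acceleration < vibrationThreshold || touchDelta > 20.0
}

print(isLikelyIntentional(touchDelta: 4.0, acceleration: 2.3))  // false: bumpy ride
print(isLikelyIntentional(touchDelta: 40.0, acceleration: 2.3)) // true: deliberate swipe
```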
Force sensors combined with MT can be used to interpret the firmness of a touch gesture and initiate different functions depending on how firm the touch is.
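In code, that firmness mapping could be as simple as a few pressure thresholds. The pressure scale and the actions below are purely illustrative:

```swift
// Map touch firmness to different actions. Thresholds are made up.
enum TapAction { case select, preview, contextMenu }

func action(forPressure pressure: Double) -> TapAction {
    switch pressure {
    case ..<0.3:    return .select      // light tap
    case 0.3..<0.7: return .preview     // firm press
    default:        return .contextMenu // hard press
    }
}

print(action(forPressure: 0.1)) // select
print(action(forPressure: 0.9)) // contextMenu
```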
One of the most significant improvements to Multi-Touch can be achieved by fusing MT with visual input from the device camera, especially on MacBooks and iMacs:
Obvious improvements can come from the ability to clearly identify each finger and assign a different function to it. For example, each finger can be assigned a different color in a painting application.
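That finger-to-function assignment is essentially a lookup table. A toy version for the painting example, with finger identities as plain labels (a real system would get them from camera-based recognition):

```swift
enum Finger { case thumb, index, middle, ring, little }

let brushColor: [Finger: String] = [
    .thumb: "black", .index: "red", .middle: "green",
    .ring: "blue", .little: "yellow"
]

// A stroke drawn by the index finger paints in red.
if let color = brushColor[.index] {
    print("Painting stroke in \(color)")
}
```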
The camera can also track your eye movements and provide “gaze vector data” to the device. Fusing gaze vector data with Multi-Touch gestures can be used for selecting the active window on screen, controlling cursor movement, and other operations we currently use a mouse for.
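Here is a back-of-the-envelope sketch of gaze-driven window selection, with invented window geometry standing in for real eye-tracking data:

```swift
// A real system would take the gaze vector from camera-based eye
// tracking and intersect it with the screen plane; we fake the result.
struct Window {
    let title: String
    let frame: (x: Double, y: Double, w: Double, h: Double)
    func contains(_ p: (x: Double, y: Double)) -> Bool {
        return p.x >= frame.x && p.x <= frame.x + frame.w
            && p.y >= frame.y && p.y <= frame.y + frame.h
    }
}

let windows = [
    Window(title: "Editor",  frame: (x: 0,   y: 0, w: 800, h: 600)),
    Window(title: "Browser", frame: (x: 800, y: 0, w: 640, h: 480)),
]

let gazePoint = (x: 950.0, y: 200.0) // where the eye tracker says you are looking
if let focused = windows.first(where: { $0.contains(gazePoint) }) {
    print("Routing the next Multi-Touch gesture to \(focused.title)")
}
```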
When Multi-Touch, finger identification, and gaze vector data are combined, the result could be an input device that spells doom for our usual keyboard and mouse.
Visual data can also be used to interpret your facial expressions. If you get stuck trying to perform some task, the frustration may show up on your face, and your device may understand that and provide some help:
…let’s say that the user is trying to scroll through a document using a two-finger vertical movement (gesture). Scrolling, however, is not working for him because he is unknowingly touching the surface with three fingers instead of the required two. He becomes frustrated with the “failure” of the device. However, in this case, the system recognizes the frustration and upon analyzing the multi-touch movement data concludes he is trying to scroll with three fingers. At this point, the device could bring the extra-finger problem to the attention of the user or it could decide to ignore the extra finger and commence scrolling. Subsequent emotional data via facial recognition would confirm to the system that the correct remedial action was taken.
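The remedial logic in that excerpt boils down to a simple decision. A hypothetical sketch, with stand-in values for the touch and facial-expression inputs:

```swift
// If a scroll gesture uses one finger too many and the camera reports
// frustration, assume intent and recover instead of failing silently.
struct GestureFrame {
    let fingerCount: Int
    let verticalDelta: Double
}

func handleScroll(_ frame: GestureFrame, userLooksFrustrated: Bool) {
    switch (frame.fingerCount, userLooksFrustrated) {
    case (2, _):
        print("Scrolling by \(frame.verticalDelta)")
    case (3, true):
        // Frustration plus a near-miss gesture: ignore the extra finger.
        print("Ignoring extra finger, scrolling by \(frame.verticalDelta)")
    default:
        print("Gesture not recognized")
    }
}

handleScroll(GestureFrame(fingerCount: 3, verticalDelta: -12),
             userLooksFrustrated: true)
```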
These are just a few of the possibilities described in the patent application. Some of them, like the facial expression/MT combination, may be pretty far off. But many others, like voice input/MT, motion sensor/MT, and visual data/MT fusion, are technically feasible already.