Working with a computer hands-free using the Nouse Perceptual Vision Interface

Conference: International Workshop on Video Processing and Recognition (VideoRec'07), May 28-30, 2007, Montréal, Québec, Canada
Abstract: Normal work with a computer implies being able to perform the following three computer control tasks: 1) pointing, 2) clicking, and 3) typing. Many attempts have been made to make it possible to perform these tasks hands-free using a video image of the user as input. Nevertheless, rehabilitation center practitioners agree that no marketable solution making vision-based hands-free computer control a commonplace reality for disabled users has been produced as of yet. Here we present the Nouse Perceptual Vision Interface (Nouse PVI), which we hope will finally fulfill a long-awaited dream of many disabled users. Evolved from the original Nouse 'Nose as Mouse' concept and currently under testing with EBRI, Nouse PVI has several unique features that make it preferable to other hands-free vision-based computer input alternatives. First, its original idea of using the nose tip as a single reference point to control a computer has been confirmed to be very convenient for disabled users: for them the nose literally becomes a new 'finger' which they can use to write words, move a cursor on screen, click or type. Being able to track the nose tip with subpixel precision within a wide range of head motion makes performing all control tasks possible.
Its second main feature is a feedback-providing mechanism, implemented using the concept of a Perceptual Nouse Cursor (Nousor), which creates an invisible link between the computer and the user. This link is very important for control, as it allows the user to adjust his or her head motions so that the computer can better interpret them. Finally, there are a number of design solutions tailored specifically for vision-based data entry using small-range head motion, such as motion codes (NouseCode), a motion-based virtual keyboard (Nouse-Board and NousePad) and a word-by-word letter drawing tool (NouseChalk). While presenting demonstrations of these innovative tools, we also address the issue of the user's ability and readiness to work with a computer in a brand new way - i.e. hands-free. The user has to understand that it is not entirely the responsibility of the computer to understand what one wants; it is also the responsibility of the user to make sure that the computer understands what his or her motions mean. Just as a conventional computer user cannot move the cursor on the screen without first putting his or her hand on the mouse, a perceptual interface user cannot work with a computer until he or she 'connects' to it. That is, the computer and the user must work as a team for the best control results to be achieved. This presentation is therefore designed to serve both as a guide to those developing vision-based input devices and as a tutorial for those who will be using them.
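The abstract describes mapping small tracked nose-tip motions onto on-screen cursor positions. The paper's actual tracking and mapping algorithms are not given here; as a minimal illustrative sketch, the function below (a hypothetical helper, not part of Nouse PVI) shows one simple way a tracked nose-tip position in the camera frame could be amplified into a full-screen cursor position:

```python
def nose_to_cursor(nose_x, nose_y, frame_w, frame_h,
                   screen_w, screen_h, gain=2.0):
    """Map a nose-tip position (pixels in the camera frame) to screen
    cursor coordinates. Small head motions are amplified by `gain` so a
    limited range of motion can cover the whole screen; the result is
    clamped to the screen bounds."""
    # Normalize the nose position to [-0.5, 0.5] around the frame centre.
    dx = nose_x / frame_w - 0.5
    dy = nose_y / frame_h - 0.5
    # Amplify the offset and re-centre it on the screen.
    sx = screen_w * (0.5 + gain * dx)
    sy = screen_h * (0.5 + gain * dy)
    # Clamp so the cursor never leaves the screen.
    sx = max(0, min(screen_w - 1, sx))
    sy = max(0, min(screen_h - 1, sy))
    return int(sx), int(sy)
```

With the nose at the frame centre the cursor sits at the screen centre; a quarter-frame offset with `gain=2.0` already reaches the screen edge, which is the kind of small-range head motion the abstract refers to.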
Affiliation: NRC Institute for Information Technology; National Research Council Canada
Peer reviewed: No
NRC number: 49354
NPARC number: 5764869
Record identifier: b8c82abc-28c1-492c-8b73-0afe3cd57e32
Record created: 2009-03-29
Record modified: 2016-05-09