US20130321279A1 - Method of capturing system input by relative finger positioning - Google Patents

Method of capturing system input by relative finger positioning

Info

Publication number
US20130321279A1
US20130321279A1 (application US 13/904,029)
Authority
US
United States
Prior art keywords
user, finger, engine, key, fingers
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US13/904,029
Other versions
US9122395B2
Inventor
Garett Engle
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to US 13/904,029 (granted as US9122395B2)
Publication of US20130321279A1
Priority to US 14/745,921 (US9361023B2)
Application granted
Publication of US9122395B2
Legal status: Expired - Fee Related
Adjusted expiration


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures
    • G06F 3/04886 - Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser, by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
    • G06F 3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
    • G06F 3/0304 - Detection arrangements using opto-electronic means

Definitions

  • various keyboard layouts and typing techniques can be pre-loaded into the system thus bypassing any requirement for the calibration process 1002 .
  • different profiles can be stored in a system and loaded into the system depending on various requirements. For instance, a different profile can be constructed for each user and then loaded into the system when that particular user has logged in or has otherwise identified him or herself to the system.
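
By way of illustration only, such a per-user profile could be persisted and reloaded as a simple JSON document; the file layout and function names below are assumptions made for this sketch rather than anything specified in the disclosure.

    import json

    def save_profile(path: str, key_states: dict) -> None:
        """Persist one user's calibrated key states (illustrative JSON layout)."""
        with open(path, "w") as fh:
            json.dump(key_states, fh)

    def load_profile(path: str) -> dict:
        """Reload a previously calibrated profile, e.g. when the user logs in."""
        with open(path) as fh:
            return json.load(fh)
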
  • the engine enters into a loop that operates to look for and detect a state change 1004 caused by motion of the fingers. For instance, if the fingers are in the “at rest” or “ready” state, any movement of the fingers that causes the position of the fingers to no longer qualify as the ready state can be interpreted as a state change. At this point, the engine then attempts to evaluate the position of the fingers to identify a new state 1006. If a state change is not detected, the engine simply continues looping or delaying until the fingers have moved a sufficient amount to trigger a state change.
  • the system may operate to make hundreds of readings per second and as such, many readings can be made before a state change is detected. Further, readings can be averaged to provide a level of hysteresis to prevent false detections and/or to provide a “debounce” type function.
  • the engine determines if the position of the fingers in the new state is a valid key state. If the position of the fingers does not match a valid key state 1006 , then the engine may look to see if the new state is a control state 1020 . Otherwise, if the new state is a valid key state, the engine operates to enter the key stroke into the system 1008 . Entering the key stroke into the system 1008 can be performed in a variety of manners, as previously presented. For instance, entering the key into the system may result in displaying a character associated with the key on a display device. In general, the key stroke is presented to the system through any of a variety of interfaces.
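
A minimal sketch of how this detection loop could be organized is shown below; the sampling call, the state classifiers and the keystroke/control handlers are hypothetical callables supplied as parameters, and the numerals in the comments refer to the actions of FIG. 10.

    import time
    from typing import Callable, Optional

    def run_engine(sample_state: Callable[[], "FingerState"],
                   classify_key: Callable[["FingerState"], Optional[str]],
                   classify_control: Callable[["FingerState"], Optional[str]],
                   enter_keystroke: Callable[[str], None],
                   handle_control: Callable[[str], None],
                   changed: Callable[["FingerState", "FingerState"], bool],
                   poll_delay: float = 0.005) -> None:
        """Illustrative state-change loop: wait for the fingers to leave the
        current state, then treat the new state as a key press, a control
        gesture, or neither (in which case the loop simply keeps watching)."""
        current = sample_state()
        while True:
            time.sleep(poll_delay)            # the reader may take hundreds of readings per second
            observed = sample_state()
            if not changed(current, observed):
                continue                      # no state change (1004) detected yet
            key = classify_key(observed)
            if key is not None:
                enter_keystroke(key)          # valid key state (1006): hand the keystroke to the system (1008)
            else:
                control = classify_control(observed)
                if control is not None:
                    handle_control(control)   # valid control state (1020): recalibrate, caps lock, etc.
            current = observed
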
  • the system and/or user can perform validation on the entered key stroke to determine if the correct key was entered or, if the system has falsely detected the wrong key state. For instance, the user can easily see the character that was entered by visually examining a display if the characters are being rendered onto a display.
  • the system can automatically detect errors or suspicious key press entries. For instance, an application that is expecting input can perform a level of validation of the data. An error can be triggered by the system if the application is expecting a number but receives a letter, if the characters form a misspelled word, or other data input knowledge rules can be applied to detect erroneous key press entries or suspicious key press entries.
  • the engine can then invoke an adjustment of the internal model or profile 1012 .
  • the engine may do this automatically or, the user can be prompted to take a particular action to invoke an adjustment of the internal model. For instance, the user may receive a visual and/or audible prompt indicating that a key press entry error has occurred.
  • the system can employ the use of several actions for adjusting the internal model or profile in response to a key press entry error. The adjustments may be made with regard to a specific user, the specific internal keyboard model or both.
  • the variation tolerance for a particular key state may be reduced or tightened to avoid a false detection. Likewise, if detected errors are at a very low percentage, such as 10%, 1% or 0.01% as non-limiting examples, the tolerance for the various key states may be increased or loosened so as to minimize the processing required to detect new key states. Thus, in some embodiments, the tolerances for the key states may dynamically change in response to detected error levels.
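
As an illustration of such dynamic adjustment, the fragment below tightens or loosens a matching tolerance in response to an observed error rate; the thresholds and the step factor are arbitrary placeholder values, not figures taken from the disclosure.

    def adjust_tolerance(tolerance: float, error_rate: float,
                         tighten_above: float = 0.25, loosen_below: float = 0.01,
                         step: float = 0.9) -> float:
        """Tighten the key-state tolerance when errors are frequent and relax it
        when errors are rare (illustrative thresholds only)."""
        if error_rate > tighten_above:
            return tolerance * step      # require a closer match to avoid false detections
        if error_rate < loosen_below:
            return tolerance / step      # accept a wider range of positions
        return tolerance
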
  • the engine returns to the state change detection loop 1004 .
  • the user may then simply continue to move his or her fingers into the next desired key-state by mimicking the physical actions of pressing a key. In this manner, the user effectively types whatever he or she wants on any surface, on the user's lap, or even thin air, as long as the reader can identify and track the user's fingers.
  • control state may include raising all fingers, raising all fingers on the left hand, raising all fingers on the right hand, moving two fingers forward or backward, making a swiping gesture, etc.
  • when the engine detects the state change associated with the control state and determines that the newly detected state is not a valid key state 1006, the engine then operates to determine if the new state is a valid control state 1020.
  • if the new state is not a valid control state 1020, the engine returns to the state change detection loop 1004. However, if the new state is a valid control state 1020, the engine operates to process the control state.
  • control states can be defined in various embodiments of the system, and similar to the key press definitions, the control states can be loaded into the system through a calibration process or by being previously loaded into the system as a profile or model.
  • some control states may include, key press entry error, erase back to the first space, cause a caps lock state to be entered, cause a numbers lock to be entered, invoke a reset or force recalibration, etc.
  • the control state will be processed by one of three available modules: a recalibrate module 1022 , a process control module 1024 and an entry error module 1026 .
  • if the control state is determined to be a request to recalibrate, the recalibrate module 1022 causes the engine to return to the calibration action 1002.
  • the system may process the entry error control state 1026 by removing the key from the system (i.e., erasing the character from the display) and making adjustments to the internal model by continuing the process at the adjust internal model action 1012. After this occurs, processing returns to the state change detection loop 1004 where the user will then be able to try to enter the key press again by reproducing the intended key-pressing action.
  • a general process control module 1024 processes the control state and then the process returns to the state change detection loop 1004 .
  • the process control module can perform functions such as, toggling the state of a caps lock, changing which profile or module is loaded, pausing the state detection loop, etc.
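
A rough sketch of dispatching a detected control state to the three handlers named above follows; the state names and the methods on the engine object are placeholders for this illustration.

    def process_control_state(state_name: str, engine) -> None:
        """Route a control state to the recalibrate (1022), entry error (1026)
        or general process control (1024) handling described above."""
        if state_name == "recalibrate":
            engine.calibrate()                  # return to the calibration action 1002
        elif state_name == "entry_error":
            engine.erase_last_key()             # remove the mistaken character
            engine.adjust_internal_model()      # adjust internal model action 1012
        else:
            engine.process_control(state_name)  # e.g. caps lock toggle, profile change, pause
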
  • the 3D reader and engine can be configured in a variety of manners and different embodiments may include differing functionalities. A few non-limiting examples of variations that may be implemented into different embodiments are reviewed below.
  • the number of fingers required in determining a unique key state or control state may vary. For instance, as previously mentioned, embodiments may be implemented to support two finger typing. Other embodiments may also be used for users that have missing digits. It will also be appreciated that the calibration process can accommodate any such requirements.
  • the number of ways to determine if a key-state has been reached may vary. For instance, some users may be very chaotic in their typing and require a large margin of error. As such, more samples may be required for detecting a valid key state in such circumstances. Further, if a larger number of key entry errors are detected, the system may automatically increase the number of samples required to detect a valid key state change. Similarly, the system may progressively decrease the number of samples until errors are detected.
  • the user may place his or her fingers into a standard default position for typing.
  • the user may not have to have his/her hands within the traditional touch-typing position but may, for example, hold both hands in a fist to tell the system that the user is ready to type.
  • the user can place his or her hands onto a surface for the 3D reader.
  • an actual surface is not necessary and the user can type in mid air.
  • the hands do not necessarily have to be together.
  • hands may be placed in a vertical orientation, may be separated by considerable distance, the palms can be facing the 3D reader, etc.
  • the various embodiments are not limited to a keyboard detector in which the user mimics key presses. For instance, the 3D reader may be utilized to detect American Sign Language or other sign language hand gestures for various letters and numbers, actual writing motions of a writing instrument forming the letters, reading lip motions when a user speaks the letters or sounds, etc.
  • the user can define virtually any gestures or movements as valid key presses.
  • a key state may not have to be unique to provide value to the system.
  • a lower-case and upper-case key state may be considered the same key state in some embodiments.
  • a given key state may be defined in many possible dimensions.
  • the key state may be defined in multiple three dimensional definitions depending on the angle of view of the hands.
  • key states may be defined in two dimensions as well. Even further, the key states may be identified using cameras, accelerometers, infra-red detectors as well as other sensors and detectors.
  • the engine may process the key state data in a variety of manners.
  • key states may be defined with levels of variation tolerance.
  • the key state for pressing a “u” key may be expanded such that a “u” key press is detected if the right index finger is on the left side, right side, top, bottom or center of an area defining the “u” key zone.
  • the tolerance of the “u” key zone can be increased if there is not a threshold amount of motion detected by the other fingers, such as the middle and ring finger on the right hand, which would be indicative of the “y” key being pressed.
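
One possible expression of this zone-widening logic is sketched below; the zone geometry, motion threshold and expansion factor are illustrative assumptions rather than values from the disclosure.

    def detect_u_press(index_pos, u_zone, other_finger_motion,
                       motion_threshold: float = 5.0, expand: float = 1.5) -> bool:
        """Return True if the right index finger falls inside the "u" key zone.

        u_zone is (x_min, x_max, y_min, y_max).  If the middle and ring fingers
        have barely moved (arguing against a "y" press), the zone is widened
        before the containment test.
        """
        x_min, x_max, y_min, y_max = u_zone
        if other_finger_motion < motion_threshold:
            cx, cy = (x_min + x_max) / 2, (y_min + y_max) / 2
            half_w = (x_max - x_min) / 2 * expand
            half_h = (y_max - y_min) / 2 * expand
            x_min, x_max = cx - half_w, cx + half_w
            y_min, y_max = cy - half_h, cy + half_h
        x, y = index_pos
        return x_min <= x <= x_max and y_min <= y <= y_max
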
  • FIG. 11 is a functional block diagram of the components of an exemplary environment, system or sub-system implementing various aspects of embodiments of the reader and engine, or in which the reader and engine may be incorporated. It will be appreciated that not all of the components illustrated in FIG. 11 are required in all embodiments of the reader and engine, but each of the components is presented and described in conjunction with FIG. 11 to provide a complete and overall understanding of the components.
  • the environment can include a general computing platform 1100 illustrated as including a processor/memory device 1102/1104 that may be integrated with each other or communicatively connected over a bus or similar interface 1106.
  • the processor 1102 can be a variety of processor types including microprocessors, micro-controllers, programmable arrays, custom IC's etc.
  • the memory element 1104 may include a variety of structures, including but not limited to RAM, ROM, magnetic media, optical media, bubble memory, FLASH memory, EPROM, EEPROM, etc.
  • the processor 1102 , or other components in the controller may also provide components such as a real-time clock, analog to digital convertors, digital to analog convertors, etc.
  • the processor 1102 also interfaces to a variety of elements including a control/device interface 1112 , a display adapter 1108 , an audio adapter 1110 , and network/device interface 1114 .
  • the control/device interface 1112 provides an interface to external controls or other devices such as the video detector 1130 .
  • Other devices may include motion sensors and other sensors, actuators, drawing heads, nozzles, cartridges, pressure actuators, leading mechanism, drums, step motors, cameras, a keyboard, a mouse, a pin pad, an audio activated device, as well as a variety of the many other available input and output devices or, another computer or processing device or the like.
  • the devices may specifically include a camera and/or infrared sensor for detecting finger or object motion.
  • the camera and/or infrared sensor may also be included internal to the platform 1100 .
  • the display adapter 1108 can be used to present a variety of information onto a display device 1116 , as well as provide other visual aspects of a user interface.
  • An exemplary display device may include an LED display, LCD display, one or more LEDs or other display devices.
  • the audio adapter 1110 interfaces to and drives a sound producing element 1118 , such as a speaker or speaker system, buzzer, bell, etc.
  • the network/interface 1114 may interface to a network 1120 which may be any type of network including, but not limited to the Internet, a global network, a wide area network, a local area network, a wired network, a wireless network or any other network type including hybrids.
  • the controller 1100 can interface to other devices or computing platforms such as one or more servers 1122 and/or third party systems 1124 .
  • a battery or power source provides power for the controller 1100 .
  • each of the verbs, “comprise”, “include” and “have”, and conjugates thereof, are used to indicate that the object or objects of the verb are not necessarily a complete listing of members, components, elements, or parts of the subject or subjects of the verb.
  • the terms “unit” and “module” are used interchangeably. Anything designated as a unit or module may be a stand-alone unit or a specialized module.
  • a unit or a module may be modular or have modular aspects allowing it to be easily removed and replaced with another similar unit or module.
  • Each unit or module may be any one of, or any combination of, software, hardware, and/or firmware.

Abstract

A reader is utilized to detect motion of a user's fingers when a user mimics a typing motion. The system can be used to define various key press states for particular finger positions and then monitor the motion of fingers to detect when a key state is entered. The system can then provide the detected key state as input to a system expecting the data input.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This is a utility patent application being filed in the United States as a non-provisional application for patent under Title 35 U.S.C. §100 et seq. and 37 C.F.R. §1.53(b) and, claiming the benefit of the prior filing date under Title 35, U.S.C. §119(e) of the U.S. provisional application for patent that was filed on May 29, 2012 and assigned Ser. No. 61/652,826, which application is incorporated herein by reference in its entirety.
  • BACKGROUND
  • As the demands of the portable market have pushed electronic devices to become smaller and lighter, the user interface devices, such as displays and keyboard input devices, have set limitations on how small the devices can become. Touch screens have been used to provide a combined keypad and video display, but the size of the displayed keys is limited by the requirement that the user be able to select a key and that the system be able to distinguish between key presses.
  • As electronic devices beyond PCs and laptops continue to grow in popularity, there is an ever greater need to interact with smart-phones, tablets, TVs, game consoles, etc. While simple navigation has always been easy on these devices, entering data from a keyboard has been the biggest challenge. Soft keyboards (such as those found on smart phones) can be too rigid, too small and lack typing feedback. Portable keyboards have to be carried around with the user, defeating the purpose and/or convenience of the device.
  • To overcome this problem, many ideas have been proposed, such as projecting a virtual keyboard onto the surface in front of the user (via lasers or light) or providing some form of light-weight touch interface that displays a soft keyboard for data input. These implementations, however, do not solve the root of the problems described above but are, rather, just different manifestations of them.
  • When typing on a virtual keyboard—whether projected onto a surface in front of the user or displayed within a touch-sensitive display—the key presses are determined by where a given finger is relative to the input device (e.g. how close to the desired key the finger pressed). This creates a hard dependency to always conform to the area, limits and behavior of the virtual device to type accurately.
  • Thus, there is a need in the art for a solution to allow portable devices, as well as other electronic type devices to be completely independent of any restrictions for a keyboard entry mechanism.
  • BRIEF SUMMARY
  • Embodiments of the present invention operate to capture input by relative finger positioning. Such embodiments alleviate the shortcomings in the market. In general, various embodiments utilize the relative position of each finger within a 3D space to determine the key being pressed by a user, thus eliminating any dependencies upon virtual keyboards of any type and the limitations they impose.
  • More specifically, in one embodiment a reader is utilized to detect motion of a user's fingers when a user mimics a typing motion. The system can be used to define various key press states for particular users and then monitor the motion of fingers to detect when a key state is entered. The system can then provide the detected key state as input to a system expecting the data input.
  • These and other embodiments, aspects, functions and variations will be presented in more detail in conjunction with a description of the figures.
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING
  • FIG. 1 is a conceptual drawing illustrating an exemplary embodiment of a 3D reader and engine utilized to identify intended keystrokes of a user.
  • FIG. 2 is a view of current state of the art technology for entering data using a keyboard.
  • FIG. 3 is a conceptual diagram illustrating an environment in which embodiments of the 3D reader and engine can be implemented.
  • FIG. 4 is a diagram depicting the typical positioning of a user's hands over a keyboard.
  • FIG. 5 is a diagram illustrating the typical positioning of a user's left and right hand on a keyboard with emphasis on the position of the finger tips.
  • FIG. 6 is a layout diagram illustrating detection of the fingers in a typing state to strike the “u” key.
  • FIG. 7 is a layout diagram illustrating the detection of the fingers in another typing state to strike the “n” key.
  • FIG. 8 is a layout diagram illustrating the detection of the fingers in another typing state to strike the “4” key.
  • FIG. 9 is a layout diagram illustrating the detection of the fingers in another typing state to strike the “z” key.
  • FIG. 10 is a flow diagram illustrating the actions that an exemplary engine may take in processing the finger motion of a user when producing a key press.
  • FIG. 11 is a functional block diagram of the components of an exemplary environment or system or sub-system implementing various aspects of embodiments of the reader and engine or, in which the reader and engine may be incorporated.
  • DETAILED DESCRIPTION OF EMBODIMENTS
  • Presented within this disclosure, including the figures, claims and other sections, are various embodiments of a three-dimensional (3D) reader that is able to track the location and position of objects, to arrive at a conclusion as to the intended action(s) to be invoked based upon the motion of the objects (such as the changing of the location and position of the objects) and then to invoke such intended actions. Although various embodiments are described, as well as features, aspects, elements, applications and functions of the various embodiments, it will be appreciated that the present invention is not limited to the actually disclosed embodiments but rather, that the disclosure simply provides examples of the embodiments to facilitate the understanding of the various aspects of the invention.
  • More specifically, the present disclosure presents embodiments of a simulated or virtual keyboard that does not require a physical, projected, or other representation of a keyboard but rather simply tracks the location, position and movement of the subject's fingers. The various embodiments include a 3D reader that is configured to track the location and position of a subject's fingers, as well as an engine configured to process the 3D data to determine keystroke operations intended by the movement of the fingers. It will be appreciated that although the various embodiments will be described as a 3D reader and an engine that tracks finger motions and identifies intended keystrokes, the present invention is not limited to such embodiments or applications. Those skilled in the art will appreciate other applications in which the various embodiments can be incorporated. A few non-limiting examples include tracking finger and hand movements during surgery, typing, data entry, piano playing, guitar playing, etc., tracking tool motions, tracking writing instrument movements, tracking the motion of a paint brush by an artist, tracking finger and hand movements for sign language, tracking lip, mouth and facial movements to identify speech elements, and tracking the motion of other body parts, coupled to an interpretation engine, to assist handicapped individuals with limited motion in controlling medical devices and/or computer inputs, etc.
  • Turning now to the figures, in which like elements are represented by similar labels throughout the several views, embodiments of a 3D reader generate data representing the motions of fingers, and an engine processes the 3D data to determine keystroke input.
  • FIG. 1 is a conceptual drawing illustrating an exemplary embodiment of a 3D reader and engine utilized to identify intended keystrokes of a user. When a user wishes to type input into a system, the user places their left hand 120 and right hand 130 in front of a 3D reader 100. The reader 100 is able to determine the location and position of each finger of each hand of the user within a 3D space 150. The 3D reader is able to detect motion of the fingers with sufficient fidelity to clearly detect that the user wishes to “press a key” when one or more fingers move.
  • The term “Press a key” is in quotes because physical keys (or virtual keys projected in front of the user or within a display) may not be presented to the user (nor are they needed). Rather, the 3D reader operates to monitor or track the actual physical actions that a user would perform in the pressing of an actual key or virtual key, even in the absence of such an actual or virtual keyboard.
  • FIG. 2 is a view of current state of the art technology for entering data using a keyboard. A computer system 200 includes either a physical or virtual keyboard 210 that is located proximate to the computer system 200, typically on the surface of a table 220 directly in front of the computer system 200. A user 220 is positioned in front of the computer system 200 with the user's hands 230 resting on or over the keyboard or virtual keyboard 210.
  • FIG. 3 is a conceptual diagram illustrating an environment in which embodiments of the 3D reader and engine can be implemented. In the illustrated embodiment, a computer system 300 equipped with the 3D reader and engine is being utilized by a user 320. The exact same motions that the user would take to type on a physical keyboard are taken when entering data with the 3D reader.
  • It will be appreciated that although the illustrated embodiment is described as a computer system 300 that includes the 3D reader and engine, the 3D reader and/or the engine can be internal or external to the computer system 300.
  • For example, in some embodiments, the 3D reader may be a device that is external to the computer system 300 and that communicates with an engine located within the computer system 300 (i.e., a software application executed by the processor within the computer system 300). In this embodiment, motion data is collected by the 3D reader and then relayed either wirelessly or over a physical connection to the engine within the computer system 300. The engine then processes the motion data to determine intended key presses and then interfaces with other applications on the computer system 300 to provide indicia of the key presses.
  • In other embodiments, the 3D reader and the engine may both be external to the computer system 300. In such embodiments, the external 3D reader tracks finger motions and then relays 3D motion data to the external engine. The engine can then process the data, determine intended key presses and communicate the key presses to the computer system 300 through a standard keyboard port, such as USB, RS232 or any other physical port, as well as through wireless ports (i.e., Bluetooth, Wi-Fi, or other wireless technologies). In such an embodiment, the external 3D reader and engine simply appear as a keyboard input to the computer system 300.
  • In yet another embodiment, the 3D reader may be internal to the computer system 300 and the 3D reader and/or computer system 300 communicates the motion data to an external engine. Again, in such an embodiment the engine can then process the data, determine intended key presses and communicate the key presses to the computer system 300 through a standard keyboard port, such as USB, RS232 or any other physical port, as well as through wireless ports (i.e., Bluetooth, Wi-Fi, or other wireless technologies). In such an embodiment, the external engine simply appears as a keyboard input to the computer system 300.
  • Yet even further, and in the embodiment that will be most thoroughly described herein, the 3D reader and engine may both be internal to the computer system 300. In this embodiment, the 3D reader detects finger motions and communicates with the engine, which then provides keyboard press data to the computer system 300. However, it will be appreciated that other variations are also anticipated, such as connecting the reader and/or engine to a system over a wired or wireless network, etc.
  • It should be appreciated that the keyboard press data derived by the engine can be provided using a variety of techniques depending on the systems in which the 3D reader and engine are incorporated. As such, the key press information can be conveyed through the basic input and output system (BIOS) of the computer system, directly into an application expecting key presses, etc. Those skilled in the relevant art will appreciate the various techniques for integrating the 3D reader and engine with the hardware and software of the computer system 300 or any other device with which the 3D reader and engine are used.
  • It should be appreciated that the 3D reader and engine can be implemented in software, hardware, firmware as well as any combination of the same.
  • Using the internally implemented embodiment as a non-limiting example, the 3D reader 350 monitors the hand and finger 330 motion of a user 320 when the hands 330 are within a 3D space 340 located proximate to the computer system 300. It will be appreciated that in the embodiments that allow the 3D reader to be external to the computer system 300, the 3D space 340 can be located anywhere and does not have to be proximate to the computer system 300. The 3D reader 350 includes a lens 355 that is used to obtain video data of the hand and finger motions. This motion data is then provided directly to the engine 360 or, alternatively, the 3D reader 350 may conduct some level of pre-processing, such as filtering, compressing, etc., prior to communicating the data to the engine 360.
  • The engine 360 then analyzes the video data and identifies key presses associated with the motion data. The key press information is then conveyed to a processor 370 within the computer system 300 through any of the available techniques.
  • FIG. 4 is a diagram depicting the typical positioning of a user's hands over a keyboard. The user's left hand 120 and right hand 130 are shown as being positioned over a keyboard 400 for conventional typing. It will be appreciated, as presented further below, that non-conventional postures may also be utilized in various embodiments and the system can be calibrated for any keyboard structure, typing posture, etc.
  • FIG. 5 is a diagram illustrating the typical positioning of a user's left and right hand on a keyboard with emphasis on the position of the finger tips. Here the reader (not shown) is configured to track the position of each finger by monitoring the location, position and movement of the tips of each finger (left pinky 550L, right pinky 550R, left “ring-finger” 560L, right “ring-finger” 560R, left middle finger 570L, right middle finger 570R, left index finger 580L, right index finger 580R, left thumb 590L and right thumb 590R) overlaid onto a keyboard 400 that resides in a virtual model internal to the reader. Thus, a keyboard or virtual keyboard external to the computer system is not required, although the various embodiments are not limited to operating without such a keyboard, virtual keyboard or any other representation of a keyboard.
  • The positions of the finger tips (550L, 550R, 560L, 560R, 570L, 570R, 580L, 580R, 590L and 590R) collectively make up a unique position-state. In this case the “default” state (i.e., the default position for touch-typing) is shown, indicating to the reader that the user is ready to type. Note that this unique default-state is being demonstrated visually in only two dimensions (via the two-dimensional location of the finger tips), whereas the reader may have access to multiple dimensions (3D space, infra-red, time, etc.) to model a unique multi-dimensional default-state. For instance, a video camera or motion sensor may be utilized to track motion of the fingers in 3D, while an infra-red sensor can also track motion based on heat signatures of muscles, etc., and timing, such as the duration of movement, may also be used as motion data.
  • In either case, regardless of how many dimensions are used, a unique default-state is identified. FIG. 5, FIG. 6, FIG. 7, FIG. 8 and FIG. 9 will continue to be described utilizing only a two-dimensional state for the simplicity of demonstrating the logic of the system.
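
For illustration only, the position-state discussed in FIG. 5 through FIG. 9 could be modeled as a small data structure holding one coordinate per fingertip. The names FingerState and FINGER_IDS below are assumptions introduced for the sketch; additional channels (depth, infra-red intensity, timing) could be attached per fingertip in the same way.

    from dataclasses import dataclass, field
    from typing import Dict, Tuple

    # Hypothetical identifiers for the ten tracked fingertips (550L ... 590R in the figures).
    FINGER_IDS = [
        "550L", "560L", "570L", "580L", "590L",   # left pinky, ring, middle, index, thumb
        "550R", "560R", "570R", "580R", "590R",   # right pinky, ring, middle, index, thumb
    ]

    @dataclass
    class FingerState:
        """One sampled position-state: an (x, y) coordinate per fingertip.

        Only two dimensions are kept here, matching the simplified discussion;
        a real reader could add depth, infra-red or timing channels.
        """
        positions: Dict[str, Tuple[float, float]] = field(default_factory=dict)

        def distance_to(self, other: "FingerState") -> float:
            """Sum of per-fingertip Euclidean distances between two states."""
            total = 0.0
            for fid in FINGER_IDS:
                x1, y1 = self.positions[fid]
                x2, y2 = other.positions[fid]
                total += ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
            return total
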
  • FIG. 6 is a layout diagram illustrating detection of the fingers in a typing state to strike the “u” key. FIG. 6 is a top view of the user's finger positions (550L, 550R, 560L, 560R, 570L, 570R, 580L, 580R, 590L and 590R) when the user's right index finger 580R is stretched out intending to actuate the “u” key. Because the right index finger position 580R is in a different location from the default position illustrated in FIG. 5, this constitutes a new state. In this case the collective finger tips positions of the right hand (550R, 560R, 570R, 580R and 590R) make up the “u”-key-state.
  • FIG. 7 is a layout diagram illustrating the detection of the fingers in another typing state to strike the “n” key. Similar to FIG. 6 above, FIG. 7 displays the top view of the user's finger positions (550L, 550R, 560L, 560R, 570L, 570R, 580L, 580R, 590L and 590R) when the user's right index finger is bent down intending to type the “n” key. Because the right index finger position 580R is in a different location this constitutes a new state. In this case, the collective finger tips positions of the right hand (550R, 560R, 570R, 580R and 590R) make up the “n”-key-state.
  • FIG. 8 is a layout diagram illustrating the detection of the fingers in another typing state to strike the “4” key. FIG. 8 displays the top view of the user's finger positions (550L, 550R, 560L, 560R, 570L, 570R, 580L, 580R, 590L and 590R) when the user's left index finger is stretched out intending to type the “4” key. Because the left index finger position 580L is in a different location this constitutes a new state. In this case the collective finger tips positions of the left hand (550L, 560L, 570L, 580L and 590L) make up the “4”-key-state. It should also be noted that when a user attempts to type a particular key, such as the “4” key as illustrated in FIG. 8, other fingers on the hand may move as well. For example, in the illustration, the position of the left ring finger position 560L and position of the middle finger 570L have also slightly moved as the user reached for the “4” key.
  • It's not important how far one key-state deviates from another (or the default-state) but, rather, that a unique state is established for each key (that is, there is no requirement that only one finger can move).
  • FIG. 9 is a layout diagram illustrating the detection of the fingers in another typing state to strike the “z” key. FIG. 9 displays the top view of the user's finger positions (550L, 550R, 560L, 560R, 570L, 570R, 580L, 580R, 590L and 590R) when the user's left index finger is bent down intending to type the “z” key. Because the left index finger position 550L is in a different location this constitutes a new state. In this case the collective finger tips positions of the left hand (550L, 560L, 570L, 580L and 590L) make up the “z”-key-state.
  • Similar to FIG. 8, in the state illustrated in FIG. 9, other fingers move in an effort to strike the z key. In the illustrated layout, the left “ring-finger” 560L moves as well when the user “presses” the “z” key.
  • The various embodiments may define key states based on the position of fingers, location of fingers, the position of fingers relative to the other fingers and/or a surface or other item, the position of the hand, looking at the entire fingers or just a certain portion of the fingers, or the like.
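
Building on the FingerState sketch above, matching an observed state against the stored key states could be a nearest-reference search within a tolerance; the function name and the default tolerance are illustrative only.

    from typing import Dict, Optional

    def match_key_state(observed: "FingerState",
                        key_states: Dict[str, "FingerState"],
                        tolerance: float = 15.0) -> Optional[str]:
        """Return the key whose stored reference state is closest to the observed
        state, or None if nothing lies within the allowed tolerance."""
        best_key, best_dist = None, float("inf")
        for key, reference in key_states.items():
            dist = observed.distance_to(reference)
            if dist < best_dist:
                best_key, best_dist = key, dist
        return best_key if best_dist <= tolerance else None
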
  • FIG. 10 is a flow diagram illustrating the actions that an exemplary engine may take in processing the finger motion of a user when producing a key press. It will be appreciated that not all of the functions or actions described in conjunction with FIG. 10 may be necessary in all embodiments of the engine, and the separation or delineation of the functions or actions represented in FIG. 10 is only for descriptive purposes. As such, the illustrated functions may correspond with particular modules or system elements but, in other embodiments, functions or actions can be combined or bifurcated into other modules or systems. Further, although the actions described in connection with FIG. 10 are attributed to the engine, it will be appreciated that some or all of the actions may be performed by the 3D reader and/or the computer system.
  • Initially, the engine 1000 operates by invoking or conducting a calibration process 1002. As previously mentioned, different users may move their fingers differently for various key presses. More particularly, the size, shape, and movement of various parts of the hand can vary between users and, as such, in some embodiments a calibration process can be performed to more accurately process the various key presses and identify the various key states. However, it will also be appreciated that in some embodiments a calibration process is not required and the system may default to a particular state definition or load a predefined profile or model. In such an embodiment, key press states for each possible key press and key press combination may be pre-programmed into the system and the system can begin tracking key presses immediately based on the initial settings. It should also be appreciated that although different users will have different sized hands, fingers and motions, general key states can be defined and a threshold level of change can be used to widen the acceptance range for various key states. The calibration process can include instructing a user to move his or her fingers to a particular position to be associated with a particular key press, and then recording data associated with the detected position. In other embodiments, a user may be instructed to perform this process a number of times, such as X times for each of the key presses to be defined. The position data can represent an exact position or a range of positions. The range of positions can be adjusted to be larger or smaller depending on the accuracy of the reader.
  • If a calibration action is performed 1002, the calibration can simply require the user to put his or her hands into the rest or ready position for a period of time; based on the user's finger locations, the system can then generate key state positions for each of the possible key presses or key press combinations. In another embodiment, the calibration process may require the user to make certain motions, such as a subset of the possible key presses, and then generate the remaining key states based on this information. In yet another embodiment, the calibration action may require the user to make motions for typing each of the potential key presses.
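  • As a non-limiting sketch of such a calibration pass (the function names prompt_user and read_snapshot, the sample count, and the profile structure are assumptions rather than details of the described embodiments), a per-key profile might be built as follows:

```python
# Hypothetical calibration sketch: prompt the user to mimic each key press a
# number of times, record the fingertip snapshots, and keep a mean position
# plus an accepted spread per key for later matching.
import math

def calibrate(keys, read_snapshot, prompt_user, samples_per_key=3):
    profile = {}
    for key in keys:
        snapshots = []
        for _ in range(samples_per_key):
            prompt_user(f"Mimic pressing the '{key}' key")
            snapshots.append(read_snapshot())      # dict: finger label -> (x, y)
        fingers = snapshots[0].keys()
        mean = {f: (sum(s[f][0] for s in snapshots) / len(snapshots),
                    sum(s[f][1] for s in snapshots) / len(snapshots))
                for f in fingers}
        # The spread across samples becomes the initial acceptance range; it
        # can later be widened or tightened as errors are (or are not) detected.
        spread = {f: max(math.dist(s[f], mean[f]) for s in snapshots)
                  for f in fingers}
        profile[key] = {"mean": mean, "spread": spread}
    return profile
```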
  • It should also be appreciated that a calibration process can be useful for defining any keyboard type of structure and any typing style. For instance, if a user desires to have a keyboard that is not based on the standard QWERTY key layout, the engine can accommodate the non-standard keyboard layout by defining the key presses through the calibration process or by obtaining pre-created profiles. Similarly, if a user does not type using the common typing techniques, the engine may still be able to detect key press states. For example, if the user types by what is referred to as two-finger typing (find and poke), the calibration process can still be utilized to identify the key states for fingers moving in this manner. As another example, users that type on handheld devices may type using only their thumbs. In any of these as well as other styles, the 3D reader can detect various states for each of the potential key presses and the calibration process can be used to define the key states for each of the possible key presses.
  • Similarly, without the calibration process, various keyboard layouts and typing techniques can be pre-loaded into the system, thus bypassing any requirement for the calibration process 1002. Further, different profiles can be stored in the system and loaded depending on various requirements. For instance, a different profile can be constructed for each user and then loaded when that particular user has logged in or has otherwise identified him or herself to the system.
  • Once the system has been calibrated, or the appropriate profiles or internal models have been loaded and the system has been initialized, the engine enters a loop that operates to look for and detect a state change 1004 caused by motion of the fingers. For instance, if the fingers are in the “at rest” or “ready” state, any movement of the fingers that causes the position of the fingers to no longer qualify as the ready state can be interpreted as a state change. At this point, the engine then attempts to evaluate the position of the fingers to identify a new state 1006. If a state change is not detected, the engine simply continues looping or delaying until the fingers are moved a sufficient amount to trigger a state change. In an exemplary embodiment, the system may operate to make hundreds of readings per second and, as such, many readings can be made before a state change is detected. Further, readings can be averaged to provide a level of hysteresis to prevent false detections and/or to provide a “debounce” type function.
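  • A minimal sketch of such a detection loop, assuming a high-rate read_snapshot interface and a simple moving-average debounce (the window size and threshold are illustrative values, not part of the described embodiments), could look like the following:

```python
# Illustrative detection loop: average a short window of readings to debounce
# sensor noise, and report a state change only when the smoothed fingertip
# positions move outside the ready-state tolerance.
from collections import deque
import math

WINDOW = 10   # readings averaged together; the reader may take hundreds per second

def smoothed(readings):
    fingers = readings[0].keys()
    n = len(readings)
    return {f: (sum(r[f][0] for r in readings) / n,
                sum(r[f][1] for r in readings) / n) for f in fingers}

def wait_for_state_change(read_snapshot, ready_state, tolerance):
    recent = deque(maxlen=WINDOW)
    while True:
        recent.append(read_snapshot())
        if len(recent) < WINDOW:
            continue
        current = smoothed(list(recent))
        deviation = sum(math.dist(current[f], ready_state[f]) for f in current)
        if deviation > tolerance:     # fingers have left the ready state
            return current            # hand the new state to the matcher
```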
  • Once a key state change is detected, the engine then determines if the position of the fingers in the new state is a valid key state. If the position of the fingers does not match a valid key state 1006, then the engine may look to see if the new state is a control state 1020. Otherwise, if the new state is a valid key state, the engine operates to enter the key stroke into the system 1008. Entering the key stroke into the system 1008 can be performed in a variety of manners, as previously presented. For instance, entering the key into the system may result in displaying a character associated with the key on a display device. In general, the key stroke is presented to the system through any of a variety of interfaces.
  • After the key press has been entered, the system and/or user can perform validation on the entered key stroke to determine if the correct key was entered or if the system has falsely detected the wrong key state. For instance, the user can easily see the character that was entered by visually examining a display if the characters are being rendered onto a display. In addition, the system can automatically detect errors or suspicious key press entries. For instance, an application that is expecting input can perform a level of validation of the data. An error can be triggered by the system if the application is expecting a number but receives a letter or if the characters form a misspelled word, and other data input knowledge rules can be applied to detect erroneous or suspicious key press entries.
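  • As a non-limiting sketch of such application-level checks (the specific rules and the dictionary argument are assumptions made for illustration), a validation step might look like:

```python
# Hypothetical validation rules: flag an entry as suspicious if a numeric
# field receives a non-digit, or if the word being typed is not found in a
# dictionary, which may indicate a misdetected key state.
def looks_suspicious(entered_char, field_expects_number, current_word, dictionary):
    if field_expects_number and not entered_char.isdigit():
        return True
    if current_word and current_word.lower() not in dictionary:
        return True
    return False
```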
  • If the system detects that an incorrect key was entered 1010, the engine can then invoke an adjustment of the internal model or profile 1012. The engine may do this automatically, or the user can be prompted to take a particular action to invoke an adjustment of the internal model. For instance, the user may receive a visual and/or audible prompt indicating that a key press entry error has occurred. The system can employ several actions for adjusting the internal model or profile in response to a key press entry error. The adjustments may be made with regard to a specific user, the specific internal keyboard model, or both. In response to an error detection, the variation tolerance for a particular key state may be reduced or tightened to avoid a false detection. Likewise, if detected errors occur at a very low percentage, such as 10%, 1% or 0.01% as non-limiting examples, the tolerance for the various key states may be increased or loosened so as to minimize the processing required to detect new key states. Thus, in some embodiments, the tolerances for the key states may dynamically change in response to detected error levels.
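  • Using the illustrative profile structure from the calibration sketch above, such a dynamic adjustment might be sketched as follows (the tightening and loosening factors and the low-error threshold are assumed values, not taken from the described embodiments):

```python
# Sketch of dynamic tolerance adjustment: tighten a key-state's accepted
# spread after a detected error, and loosen it again while the observed
# error rate stays very low, reducing the work needed to match new states.
def adjust_tolerance(profile, key, error_detected, error_rate,
                     tighten=0.9, loosen=1.05, low_error_threshold=0.01):
    factor = 1.0
    if error_detected:
        factor = tighten
    elif error_rate < low_error_threshold:
        factor = loosen
    profile[key]["spread"] = {finger: r * factor
                              for finger, r in profile[key]["spread"].items()}
```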
  • If the system and user do not detect a key press entry error, the engine returns to the state change detection loop 1004. The user may then simply continue to move his or her fingers into the next desired key-state by mimicking the physical actions of pressing a key. In this manner, the user effectively types whatever he or she wants on any surface, on the user's lap, or even in thin air, as long as the reader can identify and track the user's fingers.
  • However, if the user does detect an error in the key press entry, the user can then take an action to notify the engine that the key press entry is incorrect. As a non-limiting example, if a user observes an incorrect interpretation of a key press, the user can provide an indication of such by use of a predetermined motion that does not conflict with any other valid key-states, such as moving the fingers to a “control state”. Non-limiting examples of control states may include raising all fingers, raising all fingers on the left hand, raising all fingers on the right hand, moving two fingers forward or backward, making a swiping gesture, etc. In operation, the engine detects the state change associated with the control state and determines that the newly detected state is not a valid key state 1006; the engine then operates to determine if the new state is a valid control state 1020.
  • If the new state is not a valid control state 1020, the engine returns to the new state detection loop 1004. However, if the new state is a valid control state 1020, the engine operates to process the control state.
  • Various control states can be defined in various embodiments of the system and, similar to the key press definitions, the control states can be loaded into the system through a calibration process or by being previously loaded into the system as a profile or model. As non-limiting examples, some control states may include key press entry error, erase back to the first space, cause a caps lock state to be entered, cause a numbers lock to be entered, invoke a reset or force recalibration, etc. In the illustrated engine process, if a control state is detected, the control state will be processed by one of three available modules: a recalibrate module 1022, a process control module 1024 and an entry error module 1026. It will be appreciated that in other embodiments, additional modules or fewer modules can be utilized for processing these control states as well as others. But, in the illustrated embodiment, only two specific control state processing modules and one catch-all processing module are illustrated. If the control state is determined to be a request to recalibrate, the recalibrate module 1022 causes the engine to return to the calibration action 1002.
  • If the control state is a gesture or a predefined action that the user performs to indicate that the entered key press is an error, the system may process the entry error control state 1026 by removing the key from the system (i.e., erasing the character from the display) and making adjustments to the internal model by continuing the process at the adjust-internal-model action 1012. After this occurs, processing returns to the state change detection loop 1004, where the user will then be able to try to enter the key press again by reproducing the intended key-pressing action.
  • If the control state is not a recalibrate or entry error control state, in the illustrated embodiment a general process control module 1024 processes the control state and then the process returns to the state change detection loop 1004. The process control module can perform functions such as toggling the state of a caps lock, changing which profile or model is loaded, pausing the state detection loop, etc.
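  • A minimal dispatch for these three control-state handlers might be sketched as follows (the state names and the engine methods are hypothetical placeholders, not part of the described embodiments):

```python
# Illustrative control-state dispatch corresponding to the recalibrate (1022),
# process control (1024) and entry error (1026) modules described above.
def handle_control_state(state_name, engine):
    if state_name == "recalibrate":
        engine.calibrate()                  # return to the calibration action 1002
    elif state_name == "entry_error":
        engine.remove_last_key()            # erase the character from the display
        engine.adjust_internal_model()      # continue at the adjustment action 1012
    else:
        engine.process_control(state_name)  # e.g. caps lock toggle, profile switch, pause
    # in every case, processing then returns to the state change detection loop 1004
```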
  • Alternative Methods/Implementations
  • The 3D reader and engine can be configured in a variety of manners and different embodiments may include differing functionalities. A few non-limiting examples of variations that may be implemented into different embodiments are reviewed below.
  • In some embodiments, the number of fingers required in determining a unique key state or control state may vary. For instance, as previously mentioned, embodiments may be implemented to support two finger typing. Other embodiments may also be used for users that have missing digits. It will also be appreciated that the calibration process can accommodate any such requirements.
  • In some embodiments, the number of ways to determine if a key-state has been reached may vary. For instance, some users may be very chaotic in their typing and require a large margin of error. As such, more samples may be required for detecting a valid key state in such circumstances. Further, if a larger number of key entry errors are detected, the system may automatically increase the number of samples required to detect a valid key state change. Similarly, the system may progressively decrease the number of samples until errors are detected.
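  • As a non-limiting sketch (the step sizes and bounds are assumed values), such an adaptive sample count might be maintained as follows:

```python
# Hypothetical adjustment of the number of readings required per decision:
# demand more samples when errors occur, and slowly relax while entries are clean.
def adapt_sample_count(samples_required, error_detected, minimum=5, maximum=50):
    if error_detected:
        return min(maximum, samples_required + 5)
    return max(minimum, samples_required - 1)
```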
  • Many methods may be utilized to identify the ready state. As previously mentioned, the user may place his or her fingers into a standard default position for typing. However, in other embodiments, the user may not have to have his or her hands within the traditional touch-typing position but may, for example, hold both hands in a fist to tell the system that the user is ready to type.
  • As previously mentioned, the user can place his or her hands onto a surface for the 3D reader. However, in other embodiments, an actual surface is not necessary and the user can type in mid air. Further, the hands do not necessarily have to be together. For example, hands may be placed in a vertical orientation, may be separated by considerable distance, the palms can be facing the 3D reader, etc.
  • Although embodiments have been described for a keyboard detector in which the user mimics keyboard presses, it should be appreciated that other techniques to identify key states may also be used. For instance, the 3D reader may be utilized to detect American Sign Language or other sign language hand gestures for various letters and numbers, actual writing motions of a writing instrument forming the letters, reading lip motions when a user speaks the letters or sounds, etc. Further, with the calibration process, the user can define virtually any gestures or movements as valid key presses.
  • It will also be appreciated that the various embodiments can be utilized for full keyboards, extra-functionality keyboards, number pads, court reporting and shorthand keyboards, etc.
  • It will also be appreciated that a key state may not have to be unique to provide value to the system. For instance, a lower-case and upper-case key state may be considered the same key state in some embodiments. In addition, a given key state may be defined in many possible dimensions. For instance, the key state may be defined in multiple three-dimensional definitions depending on the angle of view of the hands. Further, key states may be defined in two dimensions as well. Even further, the key states may be identified using cameras, accelerometers, infrared detectors, as well as other sensors and detectors.
  • The engine may process the key state data in a variety of manners. As previously mentioned, key states may be defined with levels of variation tolerance. Thus, the key state for pressing a “u” key may be expanded such that a “u” key press is detected if the right index finger is on the left side, right side, top, bottom or center of an area defining the “u” key zone. Further, the tolerance of the “u” key zone can be increased if there is not a threshold amount of motion detected by the other fingers, such as the middle and ring fingers on the right hand, which would be indicative of the “y” key being pressed.
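  • As a non-limiting illustration of such a conditional tolerance (the zone bounds, the stillness threshold, and the finger labels follow the conventions of the earlier sketches and are assumptions only), a zone test for the “u” key might look like:

```python
# Hypothetical "u"-zone test: accept the press anywhere inside the zone, and
# widen the zone when the right ring (560R) and middle (570R) fingers have
# barely moved, since significant motion there would be indicative of a
# different key (e.g. "y") being pressed.
import math

def is_u_press(state, default_state, base_zone, stillness_threshold=0.5):
    (x_min, x_max), (y_min, y_max) = base_zone
    others_still = all(
        math.dist(state[f], default_state[f]) < stillness_threshold
        for f in ("560R", "570R")
    )
    if others_still:                        # widen the accepted "u" zone
        x_min, x_max = x_min - 1.0, x_max + 1.0
        y_min, y_max = y_min - 1.0, y_max + 1.0
    x, y = state["580R"]                    # right index fingertip
    return x_min <= x <= x_max and y_min <= y <= y_max
```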
  • FIG. 11 is a functional block diagram of the components of an exemplary environment or system or sub-system implementing various aspects of embodiments of the reader and engine, or in which the reader and engine may be incorporated. It will be appreciated that not all of the components illustrated in FIG. 11 are required in all embodiments of the reader and engine, but each of the components is presented and described in conjunction with FIG. 11 to provide a complete and overall understanding of the components. The environment can include a general computing platform 1100 illustrated as including a processor/memory device 1102/1104 that may be integrated with each other or communicatively connected over a bus or similar interface 1106. The processor 1102 can be a variety of processor types including microprocessors, micro-controllers, programmable arrays, custom ICs, etc., and may also include single or multiple processors with or without accelerators or the like. The memory element 1104 may include a variety of structures, including but not limited to RAM, ROM, magnetic media, optical media, bubble memory, FLASH memory, EPROM, EEPROM, etc. The processor 1102, or other components in the controller, may also provide components such as a real-time clock, analog-to-digital converters, digital-to-analog converters, etc. The processor 1102 also interfaces to a variety of elements including a control/device interface 1112, a display adapter 1108, an audio adapter 1110, and a network/device interface 1114. The control/device interface 1112 provides an interface to external controls or other devices such as the video detector 1130. Other devices may include motion sensors and other sensors, actuators, drawing heads, nozzles, cartridges, pressure actuators, leading mechanism, drums, step motors, cameras, a keyboard, a mouse, a pin pad, an audio activated device, as well as a variety of the many other available input and output devices, or another computer or processing device or the like. In the various embodiments, the devices may specifically include a camera and/or infrared sensor for detecting finger or object motion. However, it will be appreciated that the camera and/or infrared sensor may also be included internal to the platform 1100. The display adapter 1108 can be used to present a variety of information onto a display device 1116, as well as provide other visual aspects of a user interface. An exemplary display device may include an LED display, LCD display, one or more LEDs or other display devices. The audio adapter 1110 interfaces to and drives a sound producing element 1118, such as a speaker or speaker system, buzzer, bell, etc. The network/device interface 1114 may interface to a network 1120, which may be any type of network including, but not limited to, the Internet, a global network, a wide area network, a local area network, a wired network, a wireless network or any other network type including hybrids. Through the network 1120, or even directly, the controller 1100 can interface to other devices or computing platforms such as one or more servers 1122 and/or third party systems 1124. A battery or power source provides power for the controller 1100.
  • In the description and claims of the present application, each of the verbs, “comprise”, “include” and “have”, and conjugates thereof, are used to indicate that the object or objects of the verb are not necessarily a complete listing of members, components, elements, or parts of the subject or subjects of the verb.
  • In this application the words “unit” and “module” are used interchangeably. Anything designated as a unit or module may be a stand-alone unit or a specialized module. A unit or a module may be modular or have modular aspects allowing it to be easily removed and replaced with another similar unit or module. Each unit or module may be any one of, or any combination of, software, hardware, and/or firmware.
  • The present invention has been described using detailed descriptions of embodiments thereof that are provided by way of example and are not intended to limit the scope of the invention. The described embodiments comprise different features, not all of which are required in all embodiments of the invention. Some embodiments of the present invention utilize only some of the features or possible combinations of the features. Variations of embodiments of the present invention that are described and embodiments of the present invention comprising different combinations of features noted in the described embodiments will occur to persons of the art.
  • It will be appreciated by persons skilled in the art that the present invention is not limited by what has been particularly shown and described herein above. Rather the scope of the invention is defined by the claims that follow.

Claims (20)

What is claimed is:
1. A system for providing input to a computing device without requiring a keyboard, the system comprising:
a sensor configured to:
detect finger activity of a user;
generate data associated with the finger activity; and
provide the finger activity data to an engine;
the engine configured to:
process the received finger activity data to identify a key press; and
provide indicia data associated with the identified key press to the computing device, wherein the computing device can react to the indicia data as data entry.
2. The system of claim 1, wherein the engine is further configured to receive input from a user indicating that indicia data for the identified key press is incorrect.
3. The system of claim 2, wherein the engine is further configured to instruct the computing device to discard the incorrect indicia data and commence receiving of finger activity data.
4. The system of claim 1, wherein the sensor is a camera and the finger activity data is video data.
5. The system of claim 1, wherein the finger activity includes finger motion and finger position.
6. The system of claim 5, wherein the sensor can detect finger motion and finger position on a flat surface.
7. The system of claim 5, wherein the sensor can detect finger motion and finger position without a flat surface.
8. The system of claim 1, wherein the engine is further configured to access a model file defining finger positions that are associated with key press states and associating the key press states with particular indicia data.
9. The system of claim 8, wherein the engine is further configured to enter a training mode to generate the model file.
10. The system of claim 9, wherein the engine is further configured to generate the model file in the training mode by:
instructing a user to mimic typing a particular key;
recording the position of the user's fingers; and
associating the position of the user's fingers as a key press state associated with the particular key.
11. The system of claim 9, wherein the engine is further configured to generate the model file in the training mode by:
instructing a user to mimic typing a particular key X times;
recording the range of positions of the user's fingers for each of the X times; and
associating the range of positions of the user's fingers as a key press state associated with the particular key.
12. The system of claim 11, wherein the engine is further configured to decrease the range of positions of the user's fingers if errors are detected.
13. The system of claim 11, wherein the engine is further configured to increase the range of positions of the user's fingers if the percentage of errors is small.
14. The system of claim 10, wherein the engine is further configured to allow an amount of variation in the position of the user's fingers for a key press state.
15. The system of claim 10, wherein the engine is further configured to decrease the amount of variation in the position of the user's fingers if errors are detected.
16. The system of claim 10, wherein the engine is further configured to increase the amount of variation in the position of the user's fingers if the percentage of errors is small.
17. The system of claim 2, wherein the input from a user indicating that indicia data for the identified key press is incorrect is provided by a user making a finger movement that is not associated with a particular key press but is associated with a control state.
18. The system of claim 2, wherein the engine is further configured to generate a model file defining finger positions that are associated with key press states and associating the key press states with particular indicia data by entering a training mode configured to:
instruct a user to make a movement to a finger position to be associated with a particular key;
recording data representing the finger position;
associating the data representing the finger position as a key press state associated with the particular key; and
upon receiving input from a user indicating that indicia data for the identified key press is incorrect, adjusting the data representing the finger position.
19. The system of claim 1, wherein the keyboard can be any layout.
20. The system of claim 1, wherein the finger activity can be any motion that is defined as being associated with a particular key press.
US13/904,029 2012-05-29 2013-05-29 Method of capturing system input by relative finger positioning Expired - Fee Related US9122395B2 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US13/904,029 US9122395B2 (en) 2012-05-29 2013-05-29 Method of capturing system input by relative finger positioning
US14/745,921 US9361023B2 (en) 2012-05-29 2015-06-22 Method of capturing system input by relative finger positioning

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201261652826P 2012-05-29 2012-05-29
US13/904,029 US9122395B2 (en) 2012-05-29 2013-05-29 Method of capturing system input by relative finger positioning

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/745,921 Continuation US9361023B2 (en) 2012-05-29 2015-06-22 Method of capturing system input by relative finger positioning

Publications (2)

Publication Number Publication Date
US20130321279A1 true US20130321279A1 (en) 2013-12-05
US9122395B2 US9122395B2 (en) 2015-09-01

Family

ID=49669583

Family Applications (2)

Application Number Title Priority Date Filing Date
US13/904,029 Expired - Fee Related US9122395B2 (en) 2012-05-29 2013-05-29 Method of capturing system input by relative finger positioning
US14/745,921 Expired - Fee Related US9361023B2 (en) 2012-05-29 2015-06-22 Method of capturing system input by relative finger positioning

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/745,921 Expired - Fee Related US9361023B2 (en) 2012-05-29 2015-06-22 Method of capturing system input by relative finger positioning

Country Status (1)

Country Link
US (2) US9122395B2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10332490B2 (en) * 2012-09-25 2019-06-25 William Frederick Moyer Piano systems and methods for the enhanced display of the hands of a pianist
CN109032496A (en) * 2018-06-20 2018-12-18 四川斐讯信息技术有限公司 A kind of change intelligent keyboard key is shown and the method and system of keypad tone

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6614422B1 (en) * 1999-11-04 2003-09-02 Canesta, Inc. Method and apparatus for entering data using a virtual input device
US20100231522A1 (en) * 2005-02-23 2010-09-16 Zienon, Llc Method and apparatus for data entry input
US20110216007A1 (en) * 2010-03-07 2011-09-08 Shang-Che Cheng Keyboards and methods thereof
US20140337786A1 (en) * 2010-04-23 2014-11-13 Handscape Inc. Method for controlling a virtual keyboard from a touchpad of a computerized device

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11086404B2 (en) * 2013-08-26 2021-08-10 Paypal, Inc. Gesture identification
US20190384412A1 (en) * 2013-08-26 2019-12-19 Paypal, Inc. Gesture identification
US9395821B2 (en) 2014-01-03 2016-07-19 Intel Corporation Systems and techniques for user interface control
WO2015102658A1 (en) * 2014-01-03 2015-07-09 Intel Corporation Systems and techniques for user interface control
US20170097739A1 (en) * 2014-08-05 2017-04-06 Shenzhen Tcl New Technology Co., Ltd. Virtual keyboard system and typing method thereof
US9965102B2 (en) * 2014-08-05 2018-05-08 Shenzhen Tcl New Technology Co., Ltd Virtual keyboard system and typing method thereof
US20170228153A1 (en) * 2014-09-29 2017-08-10 Hewlett-Packard Development Company, L.P. Virtual keyboard
US10585584B2 (en) * 2014-09-29 2020-03-10 Hewlett-Packard Development Company, L.P. Virtual keyboard
US20170177203A1 (en) * 2015-12-18 2017-06-22 Facebook, Inc. Systems and methods for identifying dominant hands for users based on usage patterns
US11327651B2 (en) * 2020-02-12 2022-05-10 Facebook Technologies, Llc Virtual keyboard based on adaptive language model
US11899928B2 (en) 2020-02-12 2024-02-13 Meta Platforms Technologies, Llc Virtual keyboard based on adaptive language model
CN112202429A (en) * 2020-10-15 2021-01-08 桂林优利特医疗电子有限公司 Multifunctional key control system and control method thereof
US20220269333A1 (en) * 2021-02-19 2022-08-25 Apple Inc. User interfaces and device settings based on user identification
US20220365655A1 (en) * 2021-05-10 2022-11-17 Qingdao Pico Technology Co., Ltd. Virtual Keyboard Interaction Method and System
CN113238705A (en) * 2021-05-10 2021-08-10 青岛小鸟看看科技有限公司 Virtual keyboard interaction method and system

Also Published As

Publication number Publication date
US9361023B2 (en) 2016-06-07
US20150286403A1 (en) 2015-10-08
US9122395B2 (en) 2015-09-01

Legal Events

Date Code Title Description
STCF Information on status: patent grant

Free format text: PATENTED CASE

FEPP Fee payment procedure

Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Expired due to failure to pay maintenance fee

Effective date: 20190901