RU2605370C2 - System for recognition and tracking of fingers

Info

Publication number
RU2605370C2
Authority
RU
Russia
Prior art keywords
hand
user
system
rectangle
fingers
Prior art date
Application number
RU2013154102/08A
Other languages
Russian (ru)
Other versions
RU2013154102A
Inventor
Anthony AMBRUS
Kyungsuk David LEE
Andrew CAMPBELL
David HALEY
Brian MOUNT
Albert ROBLES
Daniel OSBORN
Shawn WRIGHT
Nahil SHARKASI
Dave HILL
Daniel McCULLOCH
Original Assignee
Microsoft Technology Licensing, LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US 61/493,850 (provisional US201161493850P)
Priority to US 13/277,011 (granted as US8897491B2)
Application filed by Microsoft Technology Licensing, LLC
Priority to PCT/US2012/040741 (published as WO2012170349A2)
Publication of RU2013154102A
Application granted
Publication of RU2605370C2

Classifications

    • G06T 7/20: Image analysis; Analysis of motion
    • A63F 13/428: Processing input control signals of video game devices by mapping the input signals into game commands, involving motion or position input signals, e.g. signals representing the rotation of an input controller or a player's arm motions sensed by accelerometers or gyroscopes
    • A63F 13/06: Accessories using player-operated means for controlling the position of a specific area display
    • A63F 13/10: Control of the course of the game, e.g. start, progress, end
    • A63F 13/213: Input arrangements for video game devices characterised by their sensors, purposes or types, comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
    • G06F 3/0425: Digitisers, e.g. for touch screens or touch pads, characterised by opto-electronic means using a single imaging device like a video camera for tracking the absolute position of a single or a plurality of objects with respect to an imaged reference surface
    • G06F 3/04883: Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser for entering handwritten data, e.g. gestures, text
    • G06K 9/00355: Recognition of hand or arm movements, e.g. recognition of deaf sign language
    • G06K 9/00389: Static hand gesture recognition
    • A63F 2300/1087: Input arrangements for converting player-generated signals into game device control signals comprising photodetecting means, e.g. a camera
    • A63F 2300/6045: Methods for processing data by generating or executing the game program for mapping control signals received from the input arrangement into game commands
    • A63F 2300/6607: Methods for processing data by generating or executing the game program for rendering three-dimensional images for animating game characters, e.g. skeleton kinematics

Abstract

FIELD: information technology.
SUBSTANCE: the invention relates to forming a computer model of an end user, including a model of the user's hand and fingers, captured by an image sensor in a natural user interface system. The technical result is achieved by forming a model of the user's hand, including one or more fingers, by extracting a shape descriptor for the hand data.
EFFECT: improved accuracy of recognition and tracking of the user's body, including the positions of the fingers and hands.
17 cl, 23 dwg

Description

BACKGROUND

In the past, computing applications such as computer games and multimedia applications used controllers, remote controls, keyboards, mice and the like to allow users to control game characters or other aspects of an application. More recently, computer games and multimedia applications have begun using cameras and gesture recognition software to provide a natural user interface ("NUI"). With a natural user interface, raw joint data and user gestures are detected, interpreted and used to control game characters or other aspects of an application program.

One of the challenges of a natural user interface system is distinguishing a person in the field of view of the image sensor and correctly identifying the locations of his or her body parts, including the hands and fingers, within the field of view. Routines are known for tracking the arms, legs, head and torso. However, given the subtle detail and wide variety of positions of a user's hands, traditional systems are not able to satisfactorily recognize and track the user's body, including the positions of the fingers and hands.

SUMMARY OF THE INVENTION

Disclosed herein are systems and methods for recognizing and tracking a user's skeletal joints, including the positions of the hands and fingers, with a natural user interface system. In examples, the tracked positions of the hand and fingers can be used by natural user interface systems to trigger events such as selecting, engaging, or grabbing and dragging objects on the screen. A variety of other gestures, control actions and applications may be enabled by the present technology for recognizing and tracking the positions and movements of the hands. By determining the state of the user's hand and fingers, the user's interactivity with the natural user interface system can be increased, and simpler and more intuitive interfaces can be presented to the user.

In one example, the present disclosure relates to a method of forming a model of a user's hand, including one or more fingers, for a natural user interface, comprising the steps of: (a) receiving image data of a user interacting with the natural user interface; and (b) analyzing the image data to identify a hand in the image data, said step (b) comprising the steps of: (b)(1) analyzing depth data from the image data received in said step (a) to segment the image data into data of the hand, and (b)(2) extracting a shape descriptor by applying one or more filters to the image data of the hand identified in said step (b)(1), the one or more filters analyzing the image data of the hand as compared with image data outside a boundary of the hand to distinguish the shape and orientation of the hand.

In a further example, the present disclosure relates to a system for generating a model of a user's hand, including one or more fingers, for a natural user interface, comprising: skeleton recognition means for recognizing a user's skeleton from received image data; image segmentation means for segmenting one or more regions of the body into a region representing the user's hand; and descriptor extraction means for extracting data representing the hand, including one or more fingers, and the orientation of the hand, wherein the descriptor extraction means applies a plurality of filters to analyze pixels in the region representing the hand, each filter in the plurality of filters determining a position and orientation of the hand, and wherein the descriptor extraction means combines the results of each filter to arrive at a best estimate of the position and orientation of the hand.

In another example, the present disclosure relates to a computer-readable medium, not consisting of a modulated data signal, having computer-executable instructions for programming a processor to perform a method of generating a model of a user's hand, including one or more fingers, for a natural user interface, comprising the steps of: (a) receiving image data of a user interacting with the natural user interface; (b) analyzing the image data to identify a hand in the image data; and (c) comparing the image data of the identified hand against predefined hand positions to determine whether the user has performed one of the following predefined hand gestures or control actions: (c)(1) counting on the user's fingers, (c)(2) performing an "OK" gesture, (c)(3) pressing a virtual button, (c)(4) pinching the thumb and another finger together, (c)(5) writing or drawing, (c)(6) modeling, (c)(7) controlling a puppet, (c)(8) turning a knob or opening a combination lock, (c)(9) firing a weapon, (c)(10) performing a click gesture, (c)(11) performing a gesture in which a finger may be used on an open palm to scroll and move through a virtual space, and (c)(12) moving the fingers in a scissor motion to control the legs of a virtual character.

This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed invention, nor is it intended to be used as an aid in determining the scope of the claimed invention. Furthermore, the claimed invention is not limited to implementations that solve any or all of the disadvantages noted in any part of this disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1A shows an illustrative embodiment of a target recognition, analysis, and tracking system.

Figure 1B shows a further illustrative embodiment of a target recognition, analysis, and tracking system.

Figure 1C shows another additional illustrative embodiment of a target recognition, analysis and tracking system.

Figure 2 shows an illustrative embodiment of a capture device that can be used in a target recognition, analysis, and tracking system.

Figure 3 shows an illustrative model of the body used to represent a human target.

Figure 4 shows a substantially front view of an illustrative skeletal model used to represent a human target.

Figure 5 shows a side view of an illustrative skeletal model used to represent a human target.

Figure 6 shows a flowchart of a pipeline for tracking a target in accordance with an embodiment of the present technology.

Figure 7 shows an illustrative method for determining a state of a user's hand in accordance with an embodiment of the present disclosure.

Figure 8 is a flowchart of an image segmentation means in accordance with an embodiment of the present disclosure.

Figure 9 is a flowchart of a pixel classification filter in accordance with an embodiment of the present disclosure.

Figure 10 is a decision tree of a pixel classification filter in accordance with an embodiment of the present disclosure.

Figures 11A and 11B illustrate finger tip identification using a pixel classification filter in accordance with an embodiment of the present disclosure.

Figure 12 illustrates finger identification using a pixel classification filter in accordance with an embodiment of the present disclosure.

Figure 13 illustrates a section of a hand identified using a pixel classification filter in accordance with an embodiment of the present disclosure.

Figure 14 illustrates the identification of a hand and a finger using a pixel classification filter in accordance with an embodiment of the present disclosure.

Figure 15 is a flowchart of a curvature analysis filter in accordance with an embodiment of the present disclosure.

Figure 16 illustrates the identification of a hand and a finger using a curvature analysis filter in accordance with an embodiment of the present disclosure.

Figure 17 illustrates an analysis of an open and closed hand using a depth histogram filter in accordance with an embodiment of the present disclosure.

Figure 18 is a flowchart of a supervisor filter for classifying a hand position based on the hand filters.

Figure 19A shows an illustrative embodiment of a computing environment that can be used to interpret one or more gestures in a target recognition, analysis, and tracking system.

Figure 19B illustrates another illustrative embodiment of a computing environment that can be used to interpret one or more gestures in a target recognition, analysis, and tracking system.

DETAILED DESCRIPTION

Embodiments of the present technology will now be described with reference to FIGS. 1A-19B, which generally relate to a pipeline for generating a computer model of a target user, including a model of the user's hands and fingers, captured by an image sensor in a natural user interface (NUI) system. The computer model may be generated once per frame of captured image data and represents a best estimate of the position, including the pose, of the user during the captured frame. The hand model generated for each frame can be used by a game or other application program to determine such things as user gestures and control actions. The hand model can also be fed back into the pipeline to aid in future model determinations.

As shown in FIGS. 1A-2, the hardware for implementing the present technology includes a target recognition, analysis and tracking system 10 that can be used to recognize, analyze and/or track a human target, such as a user 18. Embodiments of the target recognition, analysis and tracking system 10 include a computing environment 12 for executing a game or other application program. The computing environment 12 may include hardware components and/or software components so that it can be used to execute application programs, such as gaming and non-gaming application programs. In one embodiment, the computing environment 12 may include a processor, such as a standardized processor, a dedicated processor, a microprocessor, or the like, that can execute instructions stored on a processor-readable data storage device to perform the processes described herein.

The system 10 further includes a capture device 20 for capturing image and sound data relating to one or more users and/or objects sensed by the capture device. In embodiments, the capture device 20 may be used to capture information relating to body and hand movements and/or gestures and speech of one or more users, which is received by the computing environment and used to render, interact with and/or control aspects of a game or other application program. Examples of the computing environment 12 and the capture device 20 are explained in more detail below.

Embodiments of the target recognition, analysis and tracking system 10 may be connected to an audiovisual (A/V) device 16 having a display 14. The device 16 may, for example, be a television, telephone, computer monitor, high-definition television (HDTV), or the like, that can provide the user with the visual and audio output of a game or application. For example, the computing environment 12 may include a video adapter, such as a video card, and/or an audio adapter, such as a sound card, that can provide audio/visual signals corresponding to a game or other application program. The audiovisual device 16 may receive the audio/visual signals from the computing environment 12 and may then output the game or application visuals and/or audio corresponding to those signals to the user 18. In accordance with one embodiment, the audiovisual device 16 may be connected to the computing environment 12, for example, via an S-Video cable, coaxial cable, HDMI cable, DVI cable, VGA cable, component video cable, or the like.

In embodiments, the computing environment 12, the audiovisual device 16 and the capture device 20 may cooperate to render an avatar or on-screen character 19 on the display 14. For example, FIG. 1A shows a user 18 playing a football application. The user's movements are tracked and used to animate the movements of the avatar 19. In embodiments, the avatar 19 mimics the movements of the user 18 in real-world space, so that the user 18 can perform movements and gestures that control the movements and actions of the avatar 19 on the display 14.

As explained above, motion estimation routines such as skeleton mapping systems may lack the ability to detect subtle gestures of the user, such as, for example, the movements of the user's hand. For example, the user may wish to interact with the natural user interface system 10 by scrolling through and controlling the user interface 21 with his hand, as shown in FIG. 1B. The user may alternatively attempt to perform various gestures, for example by opening and/or closing his hand, as shown at 23 and 25 in FIG. 1C.

Accordingly, the systems and methods described below are directed to determining the state of the user's hand. For example, the action of closing and opening the hand can be used by such systems to trigger events such as selecting, engaging, or grabbing and dragging objects, for example object 27 (FIG. 1C), on the screen. These actions would otherwise correspond to pressing a button when using a controller. Such refined controller-free interaction can be used as an alternative to approaches based on hand waving or hovering, which may be unintuitive or cumbersome. A variety of other gestures, control actions and applications may be enabled by the present technology for recognizing and tracking hand movements, some of which are described in further detail below. By determining the state of the user's hand as described below, the user's interactivity with the system can be increased, and simpler and more intuitive interfaces can be presented to the user.

FIG. 1A-1B include static background objects 23, such as a floor, a chair, and a plant. They are objects within the field of view (FOV) captured by the capture device 20, but do not change from frame to frame. In addition to the floor, chair, and plant shown, static objects can be any objects captured by image cameras in the capture device 20. Additional static objects in the scene can include any walls, ceiling, windows, doors, wall decorations, etc.

Suitable examples of the system 10 and components thereof are found in the following co-pending patent applications, all of which are incorporated herein by reference: US Patent Application No. 12/475,094, entitled "Environment and/or Target Segmentation," filed May 29, 2009; US Patent Application No. 12/511,850, entitled "Auto Generating a Visual Representation," filed July 29, 2009; US Patent Application No. 12/474,655, entitled "Gesture Tool," filed May 29, 2009; US Patent Application No. 12/603,437, entitled "Pose Tracking Pipeline," filed October 21, 2009; US Patent Application No. 12/475,308, entitled "Device for Identifying and Tracking Multiple Humans Over Time," filed May 29, 2009; US Patent Application No. 12/575,388, entitled "Human Tracking System," filed October 7, 2009; US Patent Application No. 12/422,661, entitled "Gesture Recognizer System Architecture," filed April 13, 2009; and US Patent Application No. 12/391,150, entitled "Standard Gestures," filed February 23, 2009.

FIG. 2 shows an illustrative embodiment of the capture device 20 that can be used in the target recognition, analysis and tracking system 10. In an illustrative embodiment, the capture device 20 may be configured to capture video having a depth image, which may include depth values, via any suitable technique including, for example, time-of-flight, structured light, stereoscopic imaging, or the like. In accordance with one embodiment, the capture device 20 may organize the calculated depth information into "Z levels," or levels that may be perpendicular to a Z axis extending from the depth camera along its line of sight. The X and Y axes may be defined as being perpendicular to the Z axis. The Y axis may be vertical and the X axis may be horizontal. Together, the X, Y and Z axes define the three-dimensional real-world space captured by the capture device 20.

As shown in FIG. 2, the capture device 20 may include an image camera component 22. According to an illustrative embodiment, the image camera component 22 may be a depth camera that can capture a depth image of a scene. The depth image may include a two-dimensional (2D) pixel area of the captured scene, where each pixel in the 2D pixel area may represent a depth value, such as a length or distance in, for example, centimeters, millimeters, or the like, of an object in the captured scene from the camera.

As shown in FIG. 2, in accordance with an illustrative embodiment, the image camera component 22 may include an infrared (IR) light component 24, a three-dimensional (3D) camera 26 and an RGB camera 28 that can be used to capture the depth image of a scene. For example, in time-of-flight analysis, the IR light component 24 of the capture device 20 may emit infrared light onto the scene and may then use sensors (not shown) to detect the backscattered light from the surface of one or more targets and objects in the scene using, for example, the 3D camera 26 and/or the RGB camera 28.

In some embodiments, pulsed infrared light can be used so that the time between the outgoing light pulse and the corresponding incoming light pulse can be measured and used to determine the physical distance from the capture device 20 to a specific location on targets or objects in the scene. Additionally, in other illustrative embodiments, the phase of the outgoing light wave can be compared with the phase of the incoming light wave to determine the phase shift. The phase shift can then be used to determine the physical distance from the capture device 20 to a specific location on targets or objects.
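The two measurement principles in the preceding paragraphs (pulse timing and phase shift) reduce to short range equations. The sketch below is illustrative only and is not taken from the patent; the round-trip time, phase shift and modulation frequency values are hypothetical.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def distance_from_pulse(round_trip_seconds):
    """Pulsed-light case: light travels to the target and back, so halve the path."""
    return C * round_trip_seconds / 2.0

def distance_from_phase(phase_shift_rad, modulation_hz):
    """Phase-shift case: the phase shift of a modulated wave maps to a fraction
    of the modulation wavelength, again halved for the round trip."""
    wavelength = C / modulation_hz
    return (phase_shift_rad / (2.0 * math.pi)) * wavelength / 2.0

print(distance_from_pulse(20e-9))            # a 20 ns round trip is roughly 3 m
print(distance_from_phase(math.pi, 30e6))    # a half-cycle shift at 30 MHz is 2.5 m
```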

In accordance with another illustrative embodiment, time-of-flight analysis may be used to indirectly determine the physical distance from the capture device 20 to a specific location on the targets or objects by analyzing the intensity of the reflected beam of light over time via various techniques including, for example, shuttered light pulse imaging.

In another illustrative embodiment, the capture device 20 may use structured light to capture depth information. In such an analysis, patterned light (that is, light displayed as a known pattern, such as a grid pattern or a stripe pattern) may be projected onto the scene, for example via the IR light component 24. Upon striking the surface of one or more targets or objects in the scene, the pattern may become deformed in response. Such a deformation of the pattern may be captured by, for example, the 3D camera 26 and/or the RGB camera 28 and may then be analyzed to determine the physical distance from the capture device 20 to a specific location on the targets or objects.

According to another embodiment, the capture device 20 may include two or more physically separated cameras that can view the scene from different angles to obtain stereoscopic visual data that can be resolved to generate depth information. In another illustrative embodiment, the capture device 20 may use point cloud data and target digitization techniques to detect features of the user. Other sensor systems may be used in further embodiments, such as, for example, an ultrasound system capable of detecting the x, y and z axes.

The capture device 20 may further include a microphone 30. The microphone 30 may include a sensor that can receive and convert sound into an electrical signal. In accordance with one embodiment, the microphone 30 may be used to reduce feedback between the capture device 20 and the computing environment 12 in the target recognition, analysis, and tracking system 10. Additionally, the microphone 30 can be used to receive audio signals, which can also be provided by the user to control application programs, such as gaming applications, non-gaming applications, and the like, that can be executed by computing environment 12.

In an illustrative embodiment, the capture device 20 may further include a processor 32, which may be in operative communication with the image camera component 22. The processor 32 may include a standardized processor, a specialized processor, a microprocessor, or the like, that may execute instructions, which may include instructions for receiving a depth image, determining whether a suitable target may be included in the depth image, converting the suitable target into a skeletal representation or model of the target, or performing any other suitable instruction.

The capture device 20 may further include a memory component 34 that can store instructions that may be executed by the processor 32, images or frames of images captured by the 3D camera or the RGB camera, or any other suitable information, images, or the like. According to an illustrative embodiment, the memory component 34 may include random access memory (RAM), read-only memory (ROM), cache, flash memory, a hard disk, or any other suitable storage component. As shown in FIG. 2, in one embodiment the memory component 34 may be a separate component in communication with the image camera component 22 and the processor 32. According to another embodiment, the memory component 34 may be integrated into the processor 32 and/or the image camera component 22.

As shown in FIG. 2, the capture device 20 may be in communication with the computing environment 12 via a communication link 36. The communication link 36 may be a wired connection including, for example, a USB connection, a Firewire connection, an Ethernet cable connection, or the like, and/or a wireless connection, such as a wireless 802.11b, g, a, or n connection. In accordance with one embodiment, the computing environment 12 may provide a clock to the capture device 20 that may be used to determine, for example, when to capture a scene, via the communication link 36.

Additionally, the capture device 20 may provide depth information and images captured, for example, by a three-dimensional camera 26 and / or an RGB camera 28. Using these devices, a partial skeletal model can be developed in accordance with the present technology, and the resulting data is provided to computing environment 12 via communication line 36.

The computing environment 12 may also include gesture recognition means 190 for recognizing gestures, as explained below. In accordance with the present system, the computing environment 12 may also include skeleton recognition means 192, image segmentation means 194, descriptor extraction means 196 and classifier means 198. Each of these software tools is described in more detail below.

Figure 3 shows a non-limiting visual representation of an illustrative body model 70 formed by skeleton recognition means 192. The body model 70 is a machine representation of the modeled target (e.g., user 18 in FIGS. 1A and 1B). A body model may include one or more data structures that include a set of variables that collectively define a simulated target in the language of a game or other application / operating system.

The target model can be configured in different ways without departing from the context of this disclosure. In some examples, the model may include one or more data structures that represent the target as a three-dimensional model containing rigid and / or deformable shapes or parts of the body. Each part of the body can be characterized as a mathematical primitive, examples of which include, but are not limited to, spheres, anisotropically scaled spheres, cylinders, anisotropic cylinders, smooth cylinders, parallelepipeds, beveled parallelepipeds, prisms, etc.

For example, the body model 70 of FIG. 3 includes body parts bp1-bp14, each of which represents a separate part of the modeled target. Each body part is a three-dimensional shape. For example, part bp3 is a rectangular prism that represents the left hand of the modeled target, and part bp5 is an octagonal prism that represents the left upper arm of the modeled target. The body model 70 is illustrative in the sense that a body model may contain any number of body parts, each of which may be any machine-understandable representation of the corresponding part of the modeled target.

A model that includes two or more parts of the body may also include one or more joints. Each joint may enable one or more parts of the body to move relative to one or more other parts of the body. For example, a model representing a human target may include many rigid and / or deformable parts of the body, with some parts of the body representing the corresponding anatomical part of the body of the human target. In addition, each part of the model body may contain one or more structural elements (that is, “bones” or skeletal parts), and the joints are located at the intersection of adjacent bones. It should be understood that some bones may correspond to anatomical bones in the human target, and / or some bones may not have corresponding anatomical bones in the human target.

Bones and joints may collectively make up a skeletal model, which may be a constituent element of the body model. In some embodiments, a skeletal model may be used instead of another type of model, such as the model 70 of FIG. 3. The skeletal model may include one or more skeletal members for each body part and a joint between adjacent skeletal members. An illustrative skeletal model 80 and an illustrative skeletal model 82 are shown in FIGS. 4 and 5, respectively. FIG. 4 shows a skeletal model 80 viewed from the front, with joints j1-j33. FIG. 5 shows a skeletal model 82 viewed from the side, also with joints j1-j33. A skeletal model may include more or fewer joints without departing from the spirit of this disclosure. Further embodiments of the present system explained hereinafter operate with a skeletal model having 31 joints.

The body part models and skeletal models described above are non-limiting examples of types of models that may be used as machine representations of a modeled target. Other models are also within the scope of this disclosure. For example, some models may include polygonal meshes, patches, non-uniform rational B-splines, subdivision surfaces, or other high-order surfaces. A model may also include surface textures and/or other information to more accurately represent clothing, hair and/or other aspects of the modeled target. A model may further include information pertaining to a current pose, one or more past poses and/or the physics of the model. It is to be understood that a variety of different models that can be posed are compatible with the target recognition, analysis and tracking system described herein.

Software pipelines for generating skeletal models of one or more users within the field of view of the capture device 20 are known. One such system is disclosed, for example, in US Patent Application No. 12/876,418, entitled "System For Fast, Probabilistic Skeletal Tracking," filed September 7, 2010, which is incorporated herein by reference in its entirety. Under certain conditions, for example where the user is sufficiently close to the capture device 20 and at least one of the user's hands is distinguishable from other background noise, the software pipeline may further be able to generate models of the hands and/or fingers of one or more users within the field of view.

FIG. 6 is a flowchart of a software pipeline for recognizing and tracking a user's hand and/or fingers. In step 200, the pipeline receives a depth image from the capture device 20. A depth image of a portion of the user is illustrated in FIG. 7 at 302. Each pixel in the depth image includes depth information, for example as illustrated in FIG. 7 by a grayscale gradient. For example, at 302 the user's left hand is closer to the capture device 20, as indicated by the darker region of the left hand. The capture device or depth camera captures images of the user within an observed scene. As described below, the depth image of the user may be used to determine distance information for regions of the user, scale information for the user, curvature, and skeletal information of the user.

In step 204, the skeleton recognition means 192 of the pipeline estimates a skeletal model of the user, as described above, to obtain a virtual skeleton from the depth image received in step 200. For example, FIG. 7 shows a virtual skeleton 304 estimated from the depth image of the user shown at 302.

In step 208, the pipeline segments the hand or hands of the user via the image segmentation means 194 of the pipeline. In some examples, the image segmentation means 194 may additionally segment one or more regions of the body other than the hands. Segmenting the user's hand includes identifying a region of the depth image corresponding to the hand, wherein the identification is based at least in part on the skeleton information obtained in step 204. FIG. 7 illustrates an example of segmenting the depth image of the user into various regions 306 based on the estimated skeleton 304, as indicated by the differently shaded regions. FIG. 7 shows a localized hand region 308 corresponding to the user's raised right hand.

Hands or body regions may be segmented or localized in a variety of ways, and this may be based on selected joints identified in the skeleton estimation described above. As one example, hand detection and localization in the depth image may be based on the estimated wrist and/or hand-tip joints from the estimated skeleton. For example, in some embodiments hand segmentation in the depth image may be performed using a topographical search of the depth image around the hand joints, locating nearby local extrema in the depth image as candidates for fingertips. The image segmentation means 194 then segments the rest of the hand, taking into account a body-size scale factor determined from the estimated skeleton, as well as depth discontinuities for identifying boundaries.

As another example, a flood-fill approach may be used to identify regions of the depth image corresponding to the user's hands. In the flood-fill approach, the depth image may be searched from a starting point and in a starting direction; for example, the starting point may be the wrist joint and the starting direction may be the direction from the elbow to the wrist joint. Neighboring pixels in the depth image may be iteratively scored based on their projection onto the starting direction as a way of preferring points moving away from the elbow and toward the hand tip, while depth-constancy constraints, such as depth discontinuities, may be used to identify boundaries or extreme values of the user's hand in the depth image. In some examples, threshold distance values may be used to limit the search of the depth map in both the positive and negative directions relative to the starting direction, based either on fixed values or, for example, scaled based on the estimated size of the user.
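The flood-fill idea described above can be sketched in a few lines. This is a minimal illustration, not the patent's implementation: the depth array, joint pixel coordinates, depth-jump threshold and search span are all assumed inputs.

```python
import numpy as np
from collections import deque

def flood_fill_hand(depth, wrist_px, elbow_px, depth_jump=50, max_span=200):
    """Grow a hand mask from the wrist joint, preferring pixels that project
    forward along the elbow-to-wrist direction and stopping at depth
    discontinuities. depth is an HxW array (0 = no reading)."""
    h, w = depth.shape
    direction = np.array(wrist_px, float) - np.array(elbow_px, float)
    direction /= (np.linalg.norm(direction) + 1e-6)

    mask = np.zeros((h, w), bool)
    queue = deque([tuple(wrist_px)])
    mask[wrist_px[0], wrist_px[1]] = True

    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if not (0 <= ny < h and 0 <= nx < w) or mask[ny, nx]:
                continue
            # Depth-constancy constraint: reject jumps across discontinuities.
            if depth[ny, nx] == 0 or abs(int(depth[ny, nx]) - int(depth[y, x])) > depth_jump:
                continue
            # Prefer points moving from the elbow toward the hand tip, and
            # bound the search distance along/against that direction.
            offset = np.array([ny, nx], float) - np.array(wrist_px, float)
            proj = float(offset @ direction)
            if proj < -0.25 * max_span or proj > max_span:
                continue
            mask[ny, nx] = True
            queue.append((ny, nx))
    return mask
```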

As another example, a bounding sphere or other suitable bounding shape, positioned based on skeletal joints (for example, the wrist or hand-tip joints), may be used to include all pixels of the depth image up to a depth discontinuity. For example, a window may be slid over the bounding sphere to identify depth discontinuities that can be used to establish the boundary of the hand region of the depth image.

The bounding shape method may also be used to place a bounding shape around the center of the palm, which may be identified iteratively. One example of such an iterative bounding method is disclosed in a presentation by David Tuft entitled "Kinect Developer Summit at GDC 2011: Kinect for XBOX 360," attached hereto as Appendix 1, and in the paper by K. Abe, H. Saito and S. Ozawa entitled "3D drawing system via hand motion recognition from cameras," IEEE International Conference on Systems, Man, and Cybernetics, vol. 2, 2000, which publication is incorporated herein by reference in its entirety.

In general, such a method involves several iterative passes to cull pixels from the model. On each pass, the method culls pixels outside a sphere or other shape centered on the hand. Next, the method culls pixels that are too far from the hand tip (along the hand vector). The method then performs an edge detection step to edge-detect the hand boundary and remove disconnected islands. Illustrative steps of such a method are shown in the flowchart of FIG. 8. In step 224, a bounding shape is constructed around the hand center defined by the hand joint data from the skeleton recognition means 192. The bounding shape is three-dimensional and large enough to encompass the entire hand. In step 226, pixels outside the bounding shape are culled.

It may happen that the user's hand is close to his body or to the user's second hand in the depth image, in which case data from those other body parts will initially be included in the segmented image. Connected-component labeling may be performed to label different centroids in the segmented image. The centroid that is most likely the hand is selected based on its size and the location of the hand joint. Centroids that are not selected may be culled. In step 230, pixels that are too far from the hand tip along the vector of the attached arm may also be culled.
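A sketch of the per-pass culling described in the last two paragraphs, assuming a boolean hand mask plus hand-center and hand-tip pixel coordinates already obtained from the skeleton; the radius and distance limits are placeholders, and the connected-component step is omitted.

```python
import numpy as np

def cull_hand_pixels(mask, center_px, tip_px, radius_px, max_tip_dist_px):
    """One culling pass: keep only mask pixels inside a circle of radius_px
    around the hand center and not too far from the hand tip along the hand
    vector. Returns a refined boolean mask."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([ys, xs], axis=1).astype(float)

    # Cull pixels outside a shape centered on the hand center.
    inside = np.linalg.norm(pts - np.array(center_px, float), axis=1) <= radius_px

    # Cull pixels too far from the hand tip along the hand vector.
    hand_vec = np.array(tip_px, float) - np.array(center_px, float)
    hand_vec /= (np.linalg.norm(hand_vec) + 1e-6)
    along = (pts - np.array(tip_px, float)) @ hand_vec
    near_tip = np.abs(along) <= max_tip_dist_px

    refined = np.zeros_like(mask)
    keep = inside & near_tip
    refined[ys[keep], xs[keep]] = True
    return refined
```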

The skeletal data from the skeleton recognition means 192 may be noisy, so the data for the hand is further refined to identify the center of the hand. This may be done by iterating over the image and measuring the distance from each pixel to the edge of the hand contour. The image segmentation means 194 may then take a weighted average to determine the maximum/minimum distance. That is, in step 232, for each pixel in the segmented hand image, the maximum distance along the x and y axes to an edge of the hand contour is identified, as well as the minimum distance along the x and y axes to an edge of the hand contour. The distance to the edge is taken as a weight, and a weighted average of the minimum determined distances over all measured pixels is then taken to determine a likely center of the hand position within the image (step 234). Using the new center, the process may be repeated iteratively until the change in the palm center from the previous iteration is within some tolerance.
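A minimal sketch of the weighted-average palm-center refinement described above. It assumes a boolean hand mask and uses SciPy's Euclidean distance transform as a stand-in for the per-pixel distance-to-contour measurement; the radius, tolerance and iteration cap are hypothetical.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def refine_palm_center(hand_mask, start_center, radius_px=60, tol=1.0, max_iter=10):
    """Iteratively estimate the palm center of a boolean hand mask. Each pixel
    is weighted by its distance to the hand contour, so pixels deep inside the
    palm dominate the weighted average."""
    dist_to_edge = distance_transform_edt(hand_mask)  # 0 on background
    center = np.array(start_center, float)

    for _ in range(max_iter):
        ys, xs = np.nonzero(hand_mask)
        pts = np.stack([ys, xs], axis=1).astype(float)
        # Only consider pixels near the current center estimate.
        near = np.linalg.norm(pts - center, axis=1) <= radius_px
        weights = dist_to_edge[ys[near], xs[near]]
        if weights.sum() == 0:
            break
        new_center = (pts[near] * weights[:, None]).sum(axis=0) / weights.sum()
        if np.linalg.norm(new_center - center) < tol:
            center = new_center
            break
        center = new_center
    return tuple(center)
```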

In some approaches, hand segmentation may be performed when the user raises the hand away from the body, above the body, or in front of the body. In this way, identification of the hand regions in the depth image is less ambiguous, since the hand regions can be distinguished from the body more easily. Hand images are especially clean when the user's hand is oriented palm toward the capture device 20, at which point features of the hand can be detected as a contour. Those features may be noisy, but a contoured hand still allows informed decisions to be made about what the hand is doing, for example by detecting the gaps between the fingers, observing the overall shape of the hand, and mapping these using a variety of different approaches. Detecting these gaps and other features allows specific fingers to be recognized, along with the general direction in which each finger is pointing.

It should be understood that the illustrative examples of hand segmentation described above are presented for the purpose of example and are not intended to limit the scope of this disclosure. In general, any methods for segmenting a portion of a hand or body can be used individually or in combination with each other and / or with one of the above illustrative methods.

Continuing with the pipeline of FIG. 6, step 210 involves extracting a shape descriptor for a region, for example the region of the depth image corresponding to the hand as identified in step 208. The shape descriptor in step 210 is extracted by the descriptor extraction means 196 and may be any suitable representation of the hand region that is used to classify the hand region. In some embodiments, the shape descriptor may be a vector or a set of numbers used to codify or describe the shape of the hand region.

The descriptor extraction means 196 may use any of a variety of filters in step 210 to extract the shape descriptor. One filter may be referred to as a pixel classifier, which will now be described with reference to the flowchart of FIG. 9, the decision tree of FIG. 10 and the illustrations of FIGS. 11-14. In step 240, a pixel in the foreground of the segmented image is selected. These are pixels that are, at least nominally, considered to be part of the user's hand. A rectangle of predefined size is taken around the selected pixel, with the selected pixel at the center. In embodiments, the size of the rectangle may be chosen to be 1.5 times the width of a normalized finger. A "normalized finger" is a user's finger that has been adjusted to a normalized size based on the size of the skeletal model and the detected distance of the user from the capture device 20. The following steps are performed in turn for each pixel that is nominally believed to be part of the hand.

In step 242, the pixel classifier filter determines how many edges of the rectangle are intersected. An intersection is a point where the image transitions from foreground (on the hand) to background (not on the hand). For example, FIG. 11A shows a finger 276, a selected pixel 278 on the finger, and the above-described rectangle 280 of radius r around the pixel. The rectangle is intersected at two points along a single edge, at points 281a and 281b. Points 281a and 281b are where the image transitions from foreground (the finger) to background. All pixels 278 having two intersection points with the edges of their respective rectangles 280 are considered fingertips (or part of a knuckle or arm, as explained below) for the purpose of determining hand centroids, as explained below.

In step 246, the pixel classifier filter determines whether the intersections are on the same edge or on different edges. As can be seen in FIG. 11B, a finger may intersect the rectangle 280 along two adjacent edges rather than along a single edge. This information will be used to determine the direction in which the finger is pointing, as explained below.

In contrast to a fingertip, a pixel whose rectangle 280 is intersected at four points is considered a finger for the purpose of determining hand centroids, as explained below. For example, FIG. 12 shows an example in which the selected pixel 278 is sufficiently far from the fingertip that there are four intersection points 281a, 281b, 281c and 281d with the rectangle 280.

In step 242 of the flowchart of FIG. 9 and at node 264 of the decision tree of FIG. 10, the pixel classifier filter checks how many edges of the rectangle 280 are intersected. If no edges are intersected, the selected pixel is assumed to lie within the user's palm, at node 265. That is, since the size of the rectangle 280 is chosen so that at least two edges are intersected if the pixel lies on a finger or fingertip, a pixel that lies on the hand with no edges intersected is assumed to lie on the palm. If two edges are intersected, the filter proceeds to node 266 to check whether the corners of the non-intersected edges are filled (hand) or empty (background), as explained below. If four edges are intersected, at node 267, the pixel is considered to be on a finger, as explained above. If the edges of the rectangle 280 are intersected six times, at node 268, this is considered an invalid reading and is discarded (step 250).

Returning to node 266, where two edges are intersected, the pixel may be a fingertip, but it may also be in the gap between two adjacent fingers. The pixel classifier filter therefore checks the corners of the non-intersected edges (step 248). Where the corners of the non-intersected edges are filled, this indicates that the rectangle lies over the hand at those corners, and the intersection points define a valley between adjacent fingers. Conversely, where the corners of the non-intersected edges are empty (as shown in the illustration corresponding to node 266), this indicates that the rectangle lies over background pixels at those corners, and the intersection points define part of the hand.

If the corners are empty, the pixel classifier filter checks, at node 269, whether the distance between the intersection points, referred to as the chord length, is less than the maximum finger width (step 252). That is, where there are two intersection points, the pixel may be a fingertip, as shown in FIG. 11A. However, the pixel may also be part of the arm or part of the hand, for example a knuckle, as shown in FIG. 13. If so, the chord length 282 may be greater than the maximum finger width. In that case, the pixel 278 for which the rectangle 280 is being examined is considered to lie on an arm or a knuckle, at node 271 (FIG. 10).
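The crossing-count logic of the pixel classifier (steps 242-252 and nodes 264-271 above) can be summarized in a short sketch. It assumes a boolean foreground mask whose box fits inside the image, and it simplifies the corner test to a count of filled corners; thresholds and return labels are illustrative, not the patent's.

```python
import numpy as np

def classify_pixel(mask, y, x, half, max_finger_width):
    """Classify one candidate hand pixel by counting foreground/background
    transitions along the perimeter of a box of half-size `half` centered on
    (y, x). Returns 'palm', 'fingertip', 'valley', 'finger', 'knuckle_or_arm'
    or 'invalid'. Assumes the box lies fully inside the image."""
    top, bot, left, right = y - half, y + half, x - half, x + half

    # Walk the box perimeter once, clockwise, recording foreground membership.
    perim = [(top, c) for c in range(left, right + 1)]
    perim += [(r, right) for r in range(top + 1, bot + 1)]
    perim += [(bot, c) for c in range(right - 1, left - 1, -1)]
    perim += [(r, left) for r in range(bot - 1, top, -1)]
    vals = [bool(mask[r, c]) for r, c in perim]

    crossings = [i for i in range(len(vals)) if vals[i] != vals[i - 1]]
    n = len(crossings)

    if n == 0:
        return 'palm'            # no edges intersected
    if n == 4:
        return 'finger'          # four intersection points
    if n >= 6:
        return 'invalid'         # noisy reading, discard
    # Two intersection points: fingertip, valley between fingers, or knuckle/arm.
    p0 = np.array(perim[crossings[0]], float)
    p1 = np.array(perim[crossings[1]], float)
    chord = np.linalg.norm(p0 - p1)
    filled_corners = sum(bool(mask[r, c]) for r, c in
                         [(top, left), (top, right), (bot, left), (bot, right)])
    # Simplified corner test: two or more filled corners suggest the box
    # straddles the valley between adjacent fingers.
    if filled_corners >= 2:
        return 'valley'
    return 'fingertip' if chord <= max_finger_width else 'knuckle_or_arm'
```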

In addition to identifying a fingertip or a finger, a two-point or four-point intersection can also reveal the direction in which the fingertip/finger is pointing. For example, in FIG. 11A there were two intersections separated by less than the maximum finger width, so it was determined that the pixel 278 lies on a fingertip. Given such an intersection, however, inferences can be drawn about the direction in which the fingertip is pointing. FIG. 11A shows a finger 276 pointing straight up, but the fingertip 276 could also be pointing in other upward directions. Information from other points near the point 278 on the fingertip 276 can be used to draw further inferences about the direction.

FIG. 11B shows a two-point intersection that provides additional information about the direction in which the finger/fingertip is pointing. In this case, the direction can be inferred from the ratio of the distances to the shared corner. In other words, the chord between points 281a and 281b defines the hypotenuse of a triangle that also includes the sides between points 281a, 281b and the shared corner. It can be inferred that the finger points in a direction perpendicular to this hypotenuse.
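A small sketch of the direction inference for the adjacent-edge case of FIG. 11B: the chord between the two intersection points is treated as the hypotenuse, and the pointing direction is taken perpendicular to it, oriented away from the corner shared by the two intersected edges. The point coordinates and conventions are assumed for illustration, not taken from the patent.

```python
import numpy as np

def finger_direction(p_a, p_b, shared_corner):
    """Given two intersection points on adjacent box edges and the corner those
    edges share, return a unit vector perpendicular to the chord p_a-p_b,
    oriented away from the shared corner (the inferred pointing direction)."""
    p_a, p_b, corner = (np.asarray(p, float) for p in (p_a, p_b, shared_corner))
    chord = p_b - p_a
    normal = np.array([-chord[1], chord[0]])          # perpendicular to the chord
    midpoint = (p_a + p_b) / 2.0
    if np.dot(normal, midpoint - corner) < 0:         # orient away from the corner
        normal = -normal
    return normal / (np.linalg.norm(normal) + 1e-6)

# Example (image row/col coordinates): intersections on the top and right edges
# of the box, sharing its top-right corner.
print(finger_direction((0.0, 3.0), (2.0, 5.0), (0.0, 5.0)))
```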

It may happen that the hand is held with two fingers together, three fingers together, or four fingers together. Thus, after the above steps have been performed using the rectangle 280 for each pixel of the hand, the process may be repeated using a rectangle 280 slightly larger than the maximum width of two fingers held together, and then repeated again using a rectangle 280 slightly larger than the maximum width of three fingers held together, and so on.

Once the pixel classifier filter data has been collected, the pixel classifier filter attempts to build a hand model from the data in step 258 (FIG. 9). At this point there are small identified regions, or centroids, such as regions believed to be fingertips, a region believed to be the palm, and the notion of the palm center from the hand segmentation stage. The classifier means 198 then examines the finger centroids that were not classified as fingertips but, because their rectangles were intersected at four points, were classified as fingers. Orientation directions have also been identified for the finger and fingertip regions. If a finger centroid lines up with a fingertip centroid and they are in the correct relative position with respect to each other, the algorithm connects those centroids as belonging to the same finger.

The orientation of the finger region is then used to project where the knuckle of that finger should be, based on the size of the skeleton and the size of the finger. The size, position and orientation of any identified valleys between the fingers can also be used to confirm a particular hand model. The projected knuckle position is then connected to the palm. Upon completion, the pixel classifier determines a skeletal hand model 284, two examples of which are shown in FIG. 14. The model, which may be referred to as a reduced skeletal model of the hand, traces the hand segments from the fingertips through the knuckles joining the fingers to the hand, with a central bone to the palm; it includes fingertip centroids connected to finger centroids, connected to knuckle centroids, connected to a palm centroid. Data regarding known hand geometry and the hand positions possible given the known arm position can also be used to confirm or reject particular fingertip, finger, knuckle and/or palm centroid positions, as well as to discard centroid data determined not to be part of the hand.

The above will produce a hand model even if one or more portions of the hand are missing from the model. For example, a finger may be occluded, or too close to the user's body or other hand, to be detected. Or the user may be missing a finger. The pixel classifier filter will build the hand model using the finger and hand positions that it does detect.

Another filter, which may be run in addition to or instead of the pixel classification filter, may be referred to as a curvature analysis filter. This filter focuses on the curvature along the boundary of the segmented hand contour to identify peaks and valleys in an attempt to differentiate fingers. As shown in the flowchart of FIG. 15, in step 286, starting from a first pixel, the eight surrounding pixels are examined to determine which is the next pixel along the hand boundary. Each pixel is thereby assigned a value between 0 and 7 characterizing the connectivity between that pixel and the next. A chain of these numbers is built around the hand contour, giving the boundary of the hand. In step 288, these values may be converted into angles and contours around the hand to provide a graph of the hand contour and its peaks, such as the one shown in FIG. 16. These steps of forming the hand contour and peaks are described, for example, in the paper by F. Leymarie and M.D. Levine entitled "Curvature morphology," Computer Vision and Robotics Laboratory, McGill University, Montreal, Quebec, Canada, 1988, which is incorporated herein by reference in its entirety.
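A minimal sketch of the boundary-following step described above, assuming a boolean hand mask and a known starting boundary pixel; it uses 8-connectivity chain codes 0-7 and converts them to angles whose differences approximate the boundary curvature. This is a generic Moore-style tracer, not the patent's exact routine.

```python
import numpy as np

# 8-connectivity offsets indexed by chain code 0..7 (counter-clockwise from "east").
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]

def chain_code(mask, start):
    """Trace the hand boundary from a starting boundary pixel, returning the
    chain codes (0-7) that describe the contour."""
    codes, current, prev_dir = [], tuple(start), 0
    for _ in range(4 * mask.size):                 # safety cap on steps
        for k in range(8):
            d = (prev_dir + 5 + k) % 8             # resume search past the backtrack direction
            ny, nx = current[0] + OFFSETS[d][0], current[1] + OFFSETS[d][1]
            if 0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1] and mask[ny, nx]:
                codes.append(d)
                current, prev_dir = (ny, nx), d
                break
        else:
            break                                  # isolated pixel, nothing to trace
        if current == tuple(start) and len(codes) > 2:
            break                                  # back at the start: contour closed
    return codes

def codes_to_angles(codes):
    """Convert chain codes to unwrapped boundary angles (radians); successive
    differences approximate curvature, whose peaks suggest fingertips."""
    return np.unwrap(np.array(codes, float) * (np.pi / 4.0))
```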

Peaks around the hand contour are identified at step 289, and each is analyzed with respect to various peak features. A peak can be defined by a start point, a maximum point, and an end point. These three points can form a triangle, as explained below. Various peak features that can be examined include, for example (a computational sketch follows this list):

peak width;

maximum height of a given peak;

the average height of the samples of curvature within the peak;

peak shape ratio (maximum height / average height);

peak area;

distance from the hand to the peak;

direction from the elbow to the hand (x, y and z);

cross product of the peak direction and the hand direction (how small the angle is between the hand direction and the peak direction); and

the cross product of the vector from the peak's start point to its maximum point with the vector from the maximum point to its end point.
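The sketch below shows how several of the listed features might be computed from a peak's start, maximum and end points together with its curvature samples; the feature set, names and 2D representation are assumptions for illustration, not the patented definitions.

```python
import numpy as np

def peak_features(start, apex, end, curvature_samples, hand_center):
    """Compute illustrative features for one contour peak.

    start, apex, end, hand_center : 2D points (np.array) on the contour / palm.
    curvature_samples : non-empty curvature values of the samples within the peak.
    """
    samples = np.asarray(curvature_samples, dtype=float)
    width = float(np.linalg.norm(end - start))
    max_height = float(samples.max())
    avg_height = float(samples.mean())
    v1, v2 = apex - start, end - apex
    # 2D "cross product" (z component), as in the last two listed features.
    cross_legs = float(v1[0] * v2[1] - v1[1] * v2[0])
    area = 0.5 * abs(cross_legs)            # area of the start-apex-end triangle
    return {
        "width": width,
        "max_height": max_height,
        "avg_height": avg_height,
        "shape_ratio": max_height / avg_height if avg_height else 0.0,
        "area": area,
        "dist_to_hand": float(np.linalg.norm(apex - hand_center)),
        "cross_legs": cross_legs,
    }
```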

This information can be passed through various machine learning techniques at step 290, such as, for example, a support vector machine, to differentiate fingers from the hand. Support vector machines are known and described, for example, in C. Cortes and V. Vapnik, "Support-Vector Networks", Machine Learning, 20(3):273-297, September 1995, and Vladimir N. Vapnik, "The Nature of Statistical Learning Theory", Springer, New York, 1995, both of which are incorporated herein by reference in their entirety. In embodiments, the noisy data can be smoothed over time using a Hidden Markov model to maintain the state of the hands and filter out noise.
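Purely as an illustration (the patent does not prescribe a particular library or feature layout), the peak feature vectors could be passed to a support vector machine such as scikit-learn's SVC; the training data below is a placeholder standing in for labelled depth frames.

```python
import numpy as np
from sklearn.svm import SVC

# X: one row of peak features per contour peak; y: 1 = finger, 0 = not a finger.
# Real training data would come from labelled depth frames (assumed to exist here).
X_train = np.random.rand(200, 7)          # placeholder feature vectors
y_train = np.random.randint(0, 2, 200)    # placeholder labels

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)

def classify_peaks(peak_feature_rows):
    """Return 1 for peaks judged to be fingers, 0 otherwise."""
    return clf.predict(np.asarray(peak_feature_rows))
```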

The filters described above may be referred to as contour filters because they examine data relating to the contour of the hand. A further filter that can be used is a histogram filter, referred to as a depth filter because it uses depth data to build a model of the hand. This filter can be used in addition to or instead of the filters described above and can be especially useful when the user's hand points toward the capture device 20.

In the histogram filter, a histogram of distances in the hand region can be built. For example, such a histogram may contain fifteen intervals (bins), where each bin holds the number of points in the hand region whose distance in the z (depth) direction from the point closest to the camera falls within the range of distances corresponding to that bin. For example, the first bin in such a histogram may contain the number of points in the hand region whose distance to the midpoint of the hand is between 0 and 0.40 cm, the second bin contains the number of points whose distance to the midpoint of the hand is between 0.40 and 0.80 cm, and so on. In this way a vector can be created to codify the shape of the hand. Such vectors can then be normalized, for example, based on estimated body size.
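A minimal sketch of such a depth histogram, assuming an (N, 3) array of hand points with z in centimeters and fifteen 0.40 cm bins measured from the point closest to the camera (normalization by point count is an assumption; the text suggests, e.g., estimated body size):

```python
import numpy as np

def depth_histogram(hand_points, num_bins=15, bin_width_cm=0.40):
    """Build a histogram of z-distances from the hand point closest to the camera.

    hand_points : (N, 3) array of points in the hand region (x, y, z), z in cm.
    Returns a num_bins-length vector codifying the shape of the hand.
    """
    pts = np.asarray(hand_points, dtype=float)
    z_rel = pts[:, 2] - pts[:, 2].min()              # distance from the closest point
    edges = np.arange(num_bins + 1) * bin_width_cm   # 0, 0.40, 0.80, ...
    # Clip so that points beyond the last edge fall into the final bin.
    hist, _ = np.histogram(np.clip(z_rel, 0, edges[-1] - 1e-9), bins=edges)
    hist = hist.astype(float)
    return hist / hist.sum() if hist.sum() else hist  # normalize by point count
```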

In another illustrative approach, a histogram can be built based on distances and/or angles from points in the hand region to a joint, bone segment or palm plane of the user's estimated skeleton, for example, the elbow joint, wrist joint, etc. FIG. 17 illustrates two graphs showing histograms determined for a closed hand and an open hand.
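A similarly hedged sketch for a joint-relative histogram follows; the choice of joint, bin count and maximum distance are assumptions made for illustration.

```python
import numpy as np

def joint_relative_histogram(hand_points, joint, num_bins=15, max_dist_cm=12.0):
    """Histogram of Euclidean distances from hand-region points to a skeleton
    joint (e.g., the estimated wrist joint).  Bin layout is illustrative."""
    pts = np.asarray(hand_points, dtype=float)
    d = np.linalg.norm(pts - np.asarray(joint, dtype=float), axis=1)
    edges = np.linspace(0.0, max_dist_cm, num_bins + 1)
    hist, _ = np.histogram(np.clip(d, 0, max_dist_cm - 1e-9), bins=edges)
    return hist.astype(float) / max(len(pts), 1)     # normalize by point count
```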

It should be understood that the shape descriptor filter examples described here are illustrative in nature and are not intended to limit the scope of this disclosure. In general, any suitable shape descriptors for the hand region can be used, alone or in combination with each other and/or with one of the illustrative methods above. For example, shape descriptors such as the histograms or vectors described above can be mixed and matched, combined and/or concatenated into larger vectors, and so on. This may make it possible to identify new patterns that could not be identified in isolation. These filters can be augmented with historical frame data, which can indicate, for example, whether an identified finger deviates too far from the position of that finger identified in a previous frame.

Fig. 18 shows a supervisor filter for combining the results of the various filters described above. For example, a pixel classifier filter can be used to obtain a model of the hand and fingers. In addition, the pixel classifier, the curvature analysis filter, the depth histogram filter, and possibly other hand filters not shown in FIG. 19 can be run as described above, and their outputs further processed, for example, by temporal consistency filtering (for example, a low-pass filter) and smoothing techniques to obtain the hand and finger positions. As mentioned above, the contour used in the various filters described herein can be scaled to be invariant with respect to hand size and distance from the sensor, since the user's distance from the camera is known and the hand size can be derived from the analyzed skeleton.
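A hedged sketch of such a supervisor stage is shown below; confidence-weighted averaging is an assumption, since the patent leaves the exact combination method open, and the dictionary layout is hypothetical.

```python
import numpy as np

def supervise(filter_outputs):
    """Combine per-filter hand/finger estimates into a single best estimate.

    filter_outputs : list of dicts like
        {"positions": (K, 3) array of hand/finger positions, "confidence": float}
    All filters are assumed to report the same K positions in the same order.
    Returns a confidence-weighted average of the positions.
    """
    weights = np.array([f["confidence"] for f in filter_outputs], dtype=float)
    if weights.sum() == 0:
        raise ValueError("no confident filter output")
    stacked = np.stack([np.asarray(f["positions"], dtype=float) for f in filter_outputs])
    weights /= weights.sum()
    return np.tensordot(weights, stacked, axes=1)   # weighted average over filters
```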

In addition to the open or closed states of the hand, the present technology can be used to identify a specific orientation of the fingers, for example, pointing in a specific direction with one or more fingers. The technology can also be used to identify different positions of the hands, oriented at different angles within the Cartesian space with the x, y, z axes.

In embodiments, various post-classification filtering steps can be used to increase the accuracy of the hand and finger position estimates in step 216 (FIG. 6). For example, a temporal consistency filtering step can be applied to the predicted hand and finger positions between successive depth image frames to smooth the predictions and reduce temporal jitter, for example, due to spurious hand movements, sensor noise, or occasional classification errors. That is, a plurality of hand and finger positions can be estimated from the plurality of depth images received from the capture device or sensor, and temporal filtering of the plurality of estimates can be performed to estimate the hand and finger positions.
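One simple form of temporal consistency filtering is an exponential low-pass filter over successive frames, as sketched below; the smoothing factor is an assumed parameter, not a value taken from the disclosure.

```python
import numpy as np

class TemporalSmoother:
    """Exponential low-pass filter over per-frame hand/finger position estimates,
    used to reduce frame-to-frame jitter from noise or classification errors."""

    def __init__(self, alpha=0.35):
        self.alpha = alpha      # 0 < alpha <= 1; smaller = smoother, more lag
        self.state = None

    def update(self, positions):
        """Blend the new frame's estimates with the running smoothed state."""
        p = np.asarray(positions, dtype=float)
        if self.state is None:
            self.state = p
        else:
            self.state = self.alpha * p + (1.0 - self.alpha) * self.state
        return self.state
```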

At step 220, the pipeline shown in FIG. 6 may issue a response each frame based on the estimated hand state. For example, a command may be issued to a console of a computing system, such as console 12 of computing system 10. As another example, a response may be provided to a display device, such as display device 16. In this way, the user's estimated movements, including the estimated hand states, can be converted into commands for the console 12 of the system 10 so that the user can interact with the system as described above. Furthermore, the method and processes described above can be implemented to determine state estimates for any part of the user's body, for example, the mouth, eyes, etc. For example, the position of a part of the user's body can be estimated using the methods described above.
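As a trivial illustration of issuing a per-frame response from an estimated hand state (the command strings and state fields below are hypothetical, not part of the disclosed system):

```python
def issue_response(hand_state):
    """Map an estimated per-frame hand state to a console/display response.

    hand_state : dict with at least "open" (bool) and "pointing_dir" ((x, y) or None).
    Returns a command string understood by a hypothetical console interface.
    """
    if hand_state.get("pointing_dir") is not None:
        x, y = hand_state["pointing_dir"]
        return f"MOVE_CURSOR {x:.3f} {y:.3f}"   # pointing finger drives the cursor
    return "SELECT" if not hand_state.get("open", True) else "IDLE"
```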

The present technology enables a wide variety of interactions with a natural user interface system, as shown, for example, in FIGS. 1A-1C. There is a wide range of natural interactions that are based on hand and finger movements, or that combine large body movements with fine hand manipulations, and that are desirable for creating new recognized gestures, greater interactivity and more engaging games. These applications and interactions include, but are not limited to, the following:

Providing cursor positions with high accuracy - through accurate recognition and tracking of the user's pointing finger, the natural user interface system can accurately determine where the user is pointing on the screen for cursor positioning (Fig. 1B).

Finger targeting - in general, accurate recognition and tracking of a user's finger or fingers can be used in any of a variety of ways to improve control and interaction with the natural user interface system and with games or other applications running on it. Recognition of various hand configurations can be used as recognized gestures, such as, but not limited to, finger counting, finger up, finger down, the "all right" sign, the "horns" sign (index finger and little finger up), the "shaka" ("hang loose") sign, the "live long and prosper" sign from Star Trek®, a single raised finger, and others. Each of these can be used to control user interface interactions.

Virtual buttons (with tangible feedback) - precise recognition and tracking of individual fingers allows applications to use many virtual buttons that further enhance the perception of the natural user interface.

Thumb control - through the perception of orientation and reliable detection of the thumb, the hand can act as a controller - the orientation of the thumb controls the orientation of the controller, pressing the thumb to the hand is recognized as a button press.

Pinch to select - precise recognition and tracking of individual fingers allows applications to use a pinching movement between the thumb and another finger to affect some control function or application metric.

One or more finger directions - accurate recognition and tracking of individual fingers allows applications to use the direction of one or more fingers, or relative finger positions, as a control metric or to drive some other application metric.

Writing, drawing, sculpting - precise recognition and tracking of individual fingers allows applications to interpret the user as holding a pen or brush and to track how that pen or brush moves with the movement of individual fingers. Recognition of such movements allows the user to form letters, write by hand, sculpt and/or draw images.

Typing - accurate recognition and tracking of individual fingers allows typing movements to be interpreted by the natural user interface system or application program as keystrokes on a virtual keyboard, entering characters and words on the screen or providing control or application information to the natural user interface system or application program.

Hand rotation tracking - Accurate recognition and tracking of individual fingers allows applications to accurately identify hand rotation.

Puppet control - mapping the finger skeleton onto a puppet animation control system. Alternatively, the finger skeleton mapping can be used to directly control a virtual object in the same way that a physical puppet on physical strings is controlled.

Rotating a knob or opening a combination lock - precise recognition and tracking of individual fingers allows the user to select and rotate a virtual knob or open a virtual combination lock. Such a combination lock can be used to grant or deny access to protected network or stored resources.

Weapon shooting - using the detection of fingers and hands as a weapon controller - the index finger defines the target, and the thumb presses the button to fire.

Click gesture - Detect and use a finger gesture in the air for virtual interaction.

Open palm gesture - using an open palm to display a map view, signifying a modal change between first-person and third-person points of view. The index finger can be used on the open palm (like a mouse, or a finger on a touch screen) to scroll and pan through the virtual space.

Foot control - using the index and middle fingers (with the hand pointing downward) to control a character's legs, depicting running, jumping or kicking. This gesture can be combined with the open palm gesture to indicate a modal change between whole-body interactions and a user interface or navigation. For example, in an action-driven story game, a player can use whole-body control to take part in fighting, then use the open palm gesture to switch to the map view, and then use the index and middle fingers to depict running across the terrain.

Other interactions of fingers and hands are contemplated.

FIG. 19A shows an illustrative embodiment of a computing environment that can be used to interpret one or more gestures in a target recognition, analysis and tracking system. A computing environment such as the computing environment 12 described above with reference to FIGS. 1A-2 may be a multimedia console 600, such as a game console. As shown in FIG. 19A, the multimedia console 600 has a central processing unit (CPU) 601 having a level 1 cache 602, a level 2 cache 604, and a flash read-only memory (ROM) 606. The level 1 cache 602 and the level 2 cache 604 temporarily store data and therefore reduce the number of memory access cycles, thereby increasing processing speed and throughput. The CPU 601 may be provided with more than one core, and thus with additional level 1 and level 2 caches 602 and 604. The flash ROM 606 may store executable code that is loaded during the initial phase of the boot process when the multimedia console 600 is powered on.

A graphics processing unit (GPU) 608 and a video encoder/video codec (coder/decoder) 614 form a video processing pipeline for high-speed, high-resolution graphics processing. Data is carried from the GPU 608 to the video encoder/video codec 614 via a bus. The video processing pipeline outputs data to an audio/video (A/V) port 640 for transmission to a television or other display. A memory controller 610 is connected to the GPU 608 to facilitate processor access to various types of memory 612, such as, but not limited to, random access memory (RAM).

The multimedia console 600 includes an input/output (I/O) controller 620, a system management controller 622, an audio processing unit 623, a network interface controller 624, a first USB host controller 626, a second USB controller 628, and a front panel I/O subsystem 630, which are preferably implemented on a module 618. The USB controllers 626 and 628 serve as hosts for peripheral controllers 642(1)-642(2), a wireless adapter 648, and an external memory device 646 (e.g., flash memory, an external CD/DVD ROM drive, removable media, etc.). The network interface 624 and/or the wireless adapter 648 provide access to a network (e.g., the Internet, a home network, etc.) and may be any of a wide variety of wired or wireless adapter components, including an Ethernet card, a modem, a Bluetooth module, a cable modem, and the like.

System memory 643 is provided to store application data that is loaded during the boot process. A media drive 644 is provided, which may comprise a DVD/CD drive, a hard disk drive, or other removable media drive, etc. The media drive 644 may be internal or external to the multimedia console 600. Application data may be accessed via the media drive 644 for execution, playback, etc. by the multimedia console 600. The media drive 644 is connected to the I/O controller 620 via a bus, such as a Serial ATA bus or other high-speed connection (e.g., IEEE 1394).

The system management controller 622 provides a variety of service functions related to assuring the availability of the multimedia console 600. The audio processing unit 623 and an audio codec 632 form a corresponding audio processing pipeline with high fidelity and stereo processing. Audio data is carried between the audio processing unit 623 and the audio codec 632 via a communication link. The audio processing pipeline outputs data to the A/V port 640 for reproduction by an external audio player or device having audio capabilities.

The front panel I/O subsystem 630 supports the functionality of the power button 650 and the eject button 652, as well as any light-emitting diodes (LEDs) or other indicators exposed on the outer surface of the multimedia console 600. A system power supply module 636 provides power to the components of the multimedia console 600. A fan 638 cools the circuitry within the multimedia console 600.

The CPU 601, the GPU 608, the memory controller 610, and various other components within the multimedia console 600 are interconnected via one or more buses, including serial and parallel buses, a memory bus, a peripheral bus, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures can include a Peripheral Component Interconnect (PCI) bus, a PCI-Express bus, etc.

When the multimedia console 600 is powered on, application data can be loaded from the system memory 643 into memory 612 and/or the caches 602, 604 and executed on the CPU 601. The application may present a graphical user interface that provides a consistent user experience when navigating to the different types of content available on the multimedia console 600. In operation, applications and/or other content contained within the media drive 644 may be launched or played from the media drive 644 to provide additional functionality to the multimedia console 600.

The multimedia console 600 may be operated as a standalone system by simply connecting the system to a television or other display. In this standalone mode, the multimedia console 600 allows one or more users to interact with the system, watch movies, or listen to music. However, with the integration of broadband connectivity made available through the network interface 624 or the wireless adapter 648, the multimedia console 600 may further be operated as a participant in a larger network community.

When the multimedia console 600 is powered on, a set amount of hardware resources is reserved for system use by the operating system of the multimedia console. These resources may include a reservation of memory (e.g., 16 MB), CPU and GPU cycles (e.g., 5%), networking bandwidth (e.g., 8 kbps), etc. Because these resources are reserved at system boot time, the reserved resources do not exist from the application's point of view.

In particular, the memory reservation is preferably large enough to contain the launch kernel, concurrent system applications, and drivers. The CPU reservation is preferably constant, so that if the reserved CPU usage is not used by the system applications, an idle thread will consume any unused cycles.

With regard to the GPU reservation, lightweight messages generated by the system applications (for example, pop-ups) are displayed by using a GPU interrupt to schedule code to render the pop-up into an overlay. The amount of memory required for an overlay depends on the size of the overlay area, and the overlay preferably scales with the screen resolution. When a full user interface is used by the concurrent system application, it is preferable to use a resolution independent of the application's resolution. A scaler can be used to set this resolution so as to eliminate the need to change the frequency and cause a television resynchronization.

After the multimedia console 600 boots and system resources are reserved, concurrent system applications execute to provide system functionality. The system functionality is encapsulated in a set of system applications that execute within the reserved system resources described above. The operating system kernel identifies threads as system application threads versus gaming application threads. The system applications are preferably scheduled to run on the CPU 601 at predetermined times and intervals in order to provide a consistent view of system resources to the application. Scheduling should minimize cache disruption for the gaming application running on the console.

When a concurrent system application requires audio, audio processing is scheduled asynchronously to the gaming application due to its time sensitivity. A multimedia console application manager (described below) controls the gaming application's audio level (for example, mutes or attenuates it) when system applications are active.

Input devices (e.g., controllers 642(1) and 642(2)) are shared by gaming applications and system applications. The input devices are not reserved resources, but are to be switched between system applications and gaming applications so that each will have a focus of the device. The application manager preferably controls the switching of the input stream without knowledge of the gaming application, and a driver maintains state information regarding focus switches. The cameras 26, 28 and the capture device 20 may define additional input devices for the console 600.

FIG. 19B illustrates another illustrative embodiment of a computing environment 720, which may be the computing environment 12 shown in FIGS. 1A-2, used to interpret one or more positions and movements in a target recognition, analysis and tracking system. The computing system environment 720 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the presently disclosed subject matter. Neither should the computing environment 720 be interpreted as having any dependency or requirement relating to any one component or combination of components illustrated in the exemplary environment 720. In some embodiments, the various depicted computing elements may include circuitry configured to implement particular aspects of the present disclosure. For example, the term "circuitry" used in the disclosure can include specialized hardware components configured to perform function(s) by firmware or switches. In other example embodiments, the term "circuitry" can include a general-purpose processing unit, memory, etc., configured by software instructions that embody logic operable to perform function(s). In example embodiments where the circuitry includes a combination of hardware and software, an implementer may write source code embodying the logic, and the source code can be compiled into machine-readable code that can be processed by the general-purpose processing unit. Since one skilled in the art can appreciate that the state of the art has evolved to a point where there is little difference between hardware, software, or a combination of hardware/software, the selection of hardware versus software to effectuate specific functions is a design choice left to the implementer. More specifically, one skilled in the art can appreciate that a software process can be transformed into an equivalent hardware structure, and a hardware structure can itself be transformed into an equivalent software process. Thus, the selection of a hardware implementation versus a software implementation is a design choice left to the implementer.

In FIG. 19B, the computing environment 720 comprises a computer 741, which typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by the computer 741 and includes both volatile and non-volatile media, removable and non-removable media. The system memory 722 includes computer storage media in the form of volatile and/or non-volatile memory such as read-only memory (ROM) 723 and random access memory (RAM) 760. A basic input/output system (BIOS) 724, containing the basic routines that help to transfer information between elements within the computer 741, such as during start-up, is typically stored in the ROM 723. The RAM 760 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by the processing unit 759. By way of example, and not limitation, FIG. 19B illustrates an operating system 725, application programs 726, other program modules 727, and program data 728. FIG. 19B further includes a graphics processing unit (GPU) 729 having an associated video memory 730 for high-speed and high-resolution graphics processing and storage. The GPU 729 may be connected to the system bus 721 through a graphics interface 731.

The computer 741 may also include other removable/non-removable, volatile/non-volatile computer storage media. By way of example only, FIG. 19B illustrates a hard disk drive 738 that reads from or writes to non-removable, non-volatile magnetic media, a magnetic disk drive 739 that reads from or writes to a removable, non-volatile magnetic disk 754, and an optical disk drive 740 that reads from or writes to a removable, non-volatile optical disk 753 such as a CD-ROM or other optical media. Other removable/non-removable, volatile/non-volatile computer storage media that can be used in the exemplary environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The hard disk drive 738 is typically connected to the system bus 721 through a non-removable memory interface such as interface 734, and the magnetic disk drive 739 and the optical disk drive 740 are typically connected to the system bus 721 by a removable memory interface, such as interface 735.

The drives and their associated computer storage media described above and illustrated in FIG. 19B provide storage of computer-readable instructions, data structures, program modules and other data for the computer 741. In FIG. 19B, for example, the hard disk drive 738 is illustrated as storing an operating system 758, application programs 757, other program modules 756, and program data 755. Note that these components can either be the same as or different from the operating system 725, application programs 726, other program modules 727, and program data 728. The operating system 758, application programs 757, other program modules 756, and program data 755 are given different numbers here to illustrate that, at a minimum, they are different copies. A user may enter commands and information into the computer 741 through input devices such as a keyboard 751 and a pointing device 752, commonly referred to as a mouse, trackball or touch pad. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 759 through a user input interface 736 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). The cameras 26, 28 and the capture device 20 may define additional input devices for the console 700. A monitor 742 or other type of display device is also connected to the system bus 721 via an interface, such as a video interface 732. In addition to the monitor, computers may also include other peripheral output devices, such as speakers 744 and a printer 743, which may be connected through an output peripheral interface 733.

The computer 741 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 746. The remote computer 746 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 741, although only a memory storage device 747 has been illustrated in FIG. 19B. The logical connections depicted in FIG. 19B include a local area network (LAN) 745 and a wide area network (WAN) 749, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets and the Internet.

When used in a LAN networking environment, the computer 741 is connected to the LAN 745 through a network interface or adapter 737. When used in a WAN networking environment, the computer 741 typically includes a modem 750 or other means for establishing communications over the WAN 749, such as the Internet. The modem 750, which may be internal or external, may be connected to the system bus 721 via the user input interface 736 or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 741, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, FIG. 19B illustrates remote application programs 748 as residing on the memory device 747. It will be appreciated that the network connections shown are exemplary and other means of establishing a communications link between the computers may be used.

The foregoing detailed description of the system according to the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the system of the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. The described embodiments were chosen in order to best explain the principles of the system of the invention and its practical application, thereby enabling others skilled in the art to best utilize the system of the invention in various embodiments and with various modifications suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto.

Claims (17)

1. A method of forming a model of a user's hand, including one or more fingers, comprising the steps of:
(a) receiving position data representing a position of a user interacting with the sensor, wherein the position data includes at least one of depth data and image data representing a user's hand; and
(b) analyzing the position data to identify the user's hand in the position data, wherein step (b) comprises the steps of:
(b)(1) analyzing the depth data from the position data received in step (a) to segment the position data into data of the hand, and
(b)(2) retrieving a set of feature descriptors by applying one or more filters to the image data of the hand identified in step (b)(1), the one or more filters analyzing image data of the hand as compared to image data outside a boundary of the hand to discern features of the hand, including a shape and orientation of the hand.
2. The method of claim 1, further comprising the steps of executing an application program that receives commands via the reading mechanism, and affecting a control action in the application program based on the position of the hand identified in step (b).
3. The method of claim 1, further comprising the steps of: executing a gaming application receiving commands through a reading mechanism, and affecting an action in the gaming application based on the position of the hand identified in step (b).
4. The method of claim 1, wherein step (b)(1) comprises analyzing midpoints constructed from the image data to determine a best candidate for the location of the hand.
5. The method of claim 4, wherein step (b)(1) comprises the step of analyzing the best candidate hand to determine a best candidate for the center of the hand.
6. The method of claim 1, wherein step (b) (2) comprises the steps of applying a pixel classifier, comprising the steps of:
selecting pixels within a boundary of the shape descriptor of the hand,
constructing a rectangle of a predetermined size around each pixel, each rectangle being constructed in the plane of the shape descriptor,
determining intersection points with each rectangle at which image data transitions between a foreground point and a background point, and
identifying the hand and fingers from an analysis of the intersection points of each rectangle for each examined pixel.
7. The method according to claim 1, in which step (b) (2) comprises the steps of applying a curvature analysis filter, comprising the steps of:
selecting pixels along the boundary of the shape descriptor of the hand,
examining the plurality of pixels surrounding the selected pixel, and assigning a value to the selected pixel indicating which surrounding pixel is also located along the border of the shape descriptor,
converting values to angles and contours around the hand, including peaks and troughs, and
determining which of the peaks represent fingers of the hand.
8. The method of claim 1, wherein step (b) (2) comprises the steps of applying a histogram filter, comprising the steps of constructing a histogram of the distances between the plurality of points in the shape descriptor and the device capturing image data.
9. A system for forming a model of a user's hand, including one or more fingers, the system including a reading mechanism operatively coupled to a computing device, the system comprising:
skeleton recognition means for recognizing at least a portion of a user’s skeleton from received data including at least one of depth data and image data;
image segmentation means for segmenting one or more areas of the body into an area representing a user's hand; and
descriptor extraction means for retrieving data representing the hand, including one or more fingers and an orientation of the hand, wherein the descriptor extraction means applies a plurality of filters to analyze the pixels in the area representing the hand, each filter in the plurality of filters determining a position and orientation of the hand, and the descriptor extraction means combines the results of each filter to arrive at a best estimate of the position and orientation of the hand.
10. The system of claim 9, wherein said plurality of descriptor extractor filters includes one or more filters optimized to identify the position and orientation of the hand as a contour relative to a device capturing received data.
11. The system of claim 9, wherein said plurality of descriptor extractor filters includes one or more filters optimized to identify the position and orientation of the hand when it points toward or away from a device capturing received data.
12. The system of claim 9, wherein said plurality of descriptor extractor filters includes classifier means for analyzing the hand as a contour with respect to the reading mechanism, wherein the classifier means selects pixels within the area representing the user's hand, constructs a rectangle of a predetermined size around each pixel, each rectangle being constructed in the plane of the contour, determines the intersection points with each rectangle at which the image data transitions between foreground and background points, and identifies the hand and fingers from an analysis of the intersection points of each rectangle for each examined pixel.
13. The system of claim 12, wherein the classifier tool identifies a midpoint representing a fingertip when two intersection points are identified on the rectangle and the distance between the intersection points is too small to represent a palm.
14. The system of claim 13, wherein the location of the two intersection points on the same or on different sides of the rectangle indicates the orientation of the identified fingertip.
15. The system of claim 12, wherein the classifier tool identifies the midpoint representing the finger when four intersection points are identified on the rectangle.
16. The system of claim 12, wherein the classifier tool identifies the midpoint representing the palm of the hand when two intersection points are identified on the rectangle and the distance between the intersection points is too large to represent the tip of the finger.
17. The system of claim 12, wherein said rectangle constructed around a particular pixel is a first rectangle of a first size, and the classifier means further constructs around that particular pixel a second rectangle of a second size, larger than the first size, in order to detect a condition in which fingers are held together.
RU2013154102/08A 2011-06-06 2012-06-04 System for recognition and tracking of fingers RU2605370C2 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US201161493850P true 2011-06-06 2011-06-06
US61/493,850 2011-06-06
US13/277,011 US8897491B2 (en) 2011-06-06 2011-10-19 System for finger recognition and tracking
US13/277,011 2011-10-19
PCT/US2012/040741 WO2012170349A2 (en) 2011-06-06 2012-06-04 System for finger recognition and tracking

Publications (2)

Publication Number Publication Date
RU2013154102A RU2013154102A (en) 2015-06-10
RU2605370C2 true RU2605370C2 (en) 2016-12-20

Family

ID=47262102

Family Applications (1)

Application Number Title Priority Date Filing Date
RU2013154102/08A RU2605370C2 (en) 2011-06-06 2012-06-04 System for recognition and tracking of fingers

Country Status (9)

Country Link
US (1) US8897491B2 (en)
EP (1) EP2718900A4 (en)
JP (1) JP6021901B2 (en)
KR (1) KR101956325B1 (en)
AU (1) AU2012268589B2 (en)
CA (1) CA2837470C (en)
MX (1) MX2013014393A (en)
RU (1) RU2605370C2 (en)
WO (1) WO2012170349A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2662399C1 (en) * 2017-03-17 2018-07-25 Алексей Александрович Тарасов System and method for capturing movements and positions of human body and parts of human body

Families Citing this family (83)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110254765A1 (en) * 2010-04-18 2011-10-20 Primesense Ltd. Remote text input using handwriting
US8639020B1 (en) 2010-06-16 2014-01-28 Intel Corporation Method and system for modeling subjects from a depth map
JP5820366B2 (en) * 2010-10-08 2015-11-24 パナソニック株式会社 Posture estimation apparatus and posture estimation method
US9857868B2 (en) 2011-03-19 2018-01-02 The Board Of Trustees Of The Leland Stanford Junior University Method and system for ergonomic touch-free interface
US8840466B2 (en) 2011-04-25 2014-09-23 Aquifi, Inc. Method and system to create three-dimensional mapping in a two-dimensional game
JP5670255B2 (en) * 2011-05-27 2015-02-18 京セラ株式会社 Display device
JP6074170B2 (en) 2011-06-23 2017-02-01 インテル・コーポレーション Short range motion tracking system and method
JP5864144B2 (en) 2011-06-28 2016-02-17 京セラ株式会社 Display device
US8854433B1 (en) 2012-02-03 2014-10-07 Aquifi, Inc. Method and system enabling natural user interface gestures with an electronic system
US9292089B1 (en) * 2011-08-24 2016-03-22 Amazon Technologies, Inc. Gestural object selection
CN103890782B (en) * 2011-10-18 2018-03-09 诺基亚技术有限公司 Method and apparatus for gesture identification
US9628843B2 (en) * 2011-11-21 2017-04-18 Microsoft Technology Licensing, Llc Methods for controlling electronic devices using gestures
US9292767B2 (en) 2012-01-05 2016-03-22 Microsoft Technology Licensing, Llc Decision tree computation in hardware utilizing a physically distinct integrated circuit with on-chip memory and a reordering of data to be grouped
US8782565B2 (en) * 2012-01-12 2014-07-15 Cisco Technology, Inc. System for selecting objects on display
US8836768B1 (en) 2012-09-04 2014-09-16 Aquifi, Inc. Method and system enabling natural user interface gestures with user wearable glasses
US9734393B2 (en) * 2012-03-20 2017-08-15 Facebook, Inc. Gesture-based control system
US8933912B2 (en) * 2012-04-02 2015-01-13 Microsoft Corporation Touch sensitive user interface with three dimensional input sensor
US9477303B2 (en) 2012-04-09 2016-10-25 Intel Corporation System and method for combining three-dimensional tracking with a three-dimensional display for a user interface
US9747306B2 (en) * 2012-05-25 2017-08-29 Atheer, Inc. Method and apparatus for identifying input features for later recognition
US9111135B2 (en) * 2012-06-25 2015-08-18 Aquifi, Inc. Systems and methods for tracking human hands using parts based template matching using corresponding pixels in bounded regions of a sequence of frames that are a specified distance interval from a reference camera
US8934675B2 (en) 2012-06-25 2015-01-13 Aquifi, Inc. Systems and methods for tracking human hands by performing parts based template matching using images from multiple viewpoints
US20140007019A1 (en) * 2012-06-29 2014-01-02 Nokia Corporation Method and apparatus for related user inputs
US20140022171A1 (en) * 2012-07-19 2014-01-23 Omek Interactive, Ltd. System and method for controlling an external system using a remote device with a depth sensor
US20140073383A1 (en) * 2012-09-12 2014-03-13 Industrial Technology Research Institute Method and system for motion comparison
US9658695B2 (en) * 2012-11-08 2017-05-23 Cuesta Technology Holdings, Llc Systems and methods for alternative control of touch-based devices
US9671874B2 (en) 2012-11-08 2017-06-06 Cuesta Technology Holdings, Llc Systems and methods for extensions to alternative control of touch-based devices
WO2014080487A1 (en) * 2012-11-22 2014-05-30 富士通株式会社 Information processing device, body part determination program and body part determination method
US10126820B1 (en) * 2012-11-29 2018-11-13 Amazon Technologies, Inc. Open and closed hand detection
US9092665B2 (en) 2013-01-30 2015-07-28 Aquifi, Inc Systems and methods for initializing motion tracking of human hands
US9129155B2 (en) * 2013-01-30 2015-09-08 Aquifi, Inc. Systems and methods for initializing motion tracking of human hands using template matching within bounded regions determined using a depth map
JP6048189B2 (en) * 2013-02-08 2016-12-21 株式会社リコー Projection system, image generation program, information processing apparatus, and image generation method
US9201499B1 (en) * 2013-02-11 2015-12-01 Amazon Technologies, Inc. Object tracking in a 3-dimensional environment
US9052746B2 (en) 2013-02-15 2015-06-09 Microsoft Technology Licensing, Llc User center-of-mass and mass distribution extraction using depth images
US8994652B2 (en) * 2013-02-15 2015-03-31 Intel Corporation Model-based multi-hypothesis target tracker
US9275277B2 (en) * 2013-02-22 2016-03-01 Kaiser Foundation Hospitals Using a combination of 2D and 3D image data to determine hand features information
US20140245200A1 (en) * 2013-02-25 2014-08-28 Leap Motion, Inc. Display control with gesture-selectable control paradigms
US9135516B2 (en) 2013-03-08 2015-09-15 Microsoft Technology Licensing, Llc User body angle, curvature and average extremity positions extraction using depth images
US9092657B2 (en) 2013-03-13 2015-07-28 Microsoft Technology Licensing, Llc Depth image processing
US9142034B2 (en) 2013-03-14 2015-09-22 Microsoft Technology Licensing, Llc Center of mass state vector for analyzing user motion in 3D images
US9159140B2 (en) 2013-03-14 2015-10-13 Microsoft Technology Licensing, Llc Signal analysis for repetition detection and analysis
US9298266B2 (en) 2013-04-02 2016-03-29 Aquifi, Inc. Systems and methods for implementing three-dimensional (3D) gesture based graphical user interfaces (GUI) that incorporate gesture reactive interface objects
KR101436050B1 (en) * 2013-06-07 2014-09-02 한국과학기술연구원 Method of establishing database including hand shape depth images and method and device of recognizing hand shapes
US9144744B2 (en) 2013-06-10 2015-09-29 Microsoft Corporation Locating and orienting device in space
US9934451B2 (en) 2013-06-25 2018-04-03 Microsoft Technology Licensing, Llc Stereoscopic object detection leveraging assumed distance
US9208566B2 (en) * 2013-08-09 2015-12-08 Microsoft Technology Licensing, Llc Speckle sensing for motion tracking
KR20150028623A (en) 2013-09-06 2015-03-16 삼성전자주식회사 Method and apparatus for processing images
US9405375B2 (en) * 2013-09-13 2016-08-02 Qualcomm Incorporated Translation and scale invariant features for gesture recognition
TWI499966B (en) 2013-10-08 2015-09-11 Univ Nat Taiwan Science Tech Interactive operation method of electronic apparatus
RU2013148582A (en) * 2013-10-30 2015-05-10 ЭлЭсАй Корпорейшн Image processing processor containing a gesture recognition system with a computer-effective fixed hand position recognition
US20150139487A1 (en) * 2013-11-21 2015-05-21 Lsi Corporation Image processor with static pose recognition module utilizing segmented region of interest
US9507417B2 (en) 2014-01-07 2016-11-29 Aquifi, Inc. Systems and methods for implementing head tracking based graphical user interfaces (GUI) that incorporate gesture reactive interface objects
US9619105B1 (en) 2014-01-30 2017-04-11 Aquifi, Inc. Systems and methods for gesture based interaction with viewpoint dependent user interfaces
JP6165650B2 (en) * 2014-02-14 2017-07-19 株式会社ソニー・インタラクティブエンタテインメント Information processing apparatus and information processing method
WO2015126392A1 (en) 2014-02-20 2015-08-27 Hewlett-Packard Development Company, L.P. Emulating a user performing spatial gestures
US9436872B2 (en) 2014-02-24 2016-09-06 Hong Kong Applied Science and Technology Research Institute Company Limited System and method for detecting and tracking multiple parts of an object
US9618618B2 (en) 2014-03-10 2017-04-11 Elwha Llc Systems and methods for ultrasonic position and motion detection
RU2014113049A (en) * 2014-04-03 2015-10-10 ЭлЭсАй Корпорейшн Image processor containing a gesture recognition system with object tracking on the basis of computing signs of circuits for two or more objects
US9739883B2 (en) 2014-05-16 2017-08-22 Elwha Llc Systems and methods for ultrasonic velocity and acceleration detection
US9424490B2 (en) 2014-06-27 2016-08-23 Microsoft Technology Licensing, Llc System and method for classifying pixels
US20160014395A1 (en) * 2014-07-10 2016-01-14 Arete Associates Data fusion processing to identify obscured objects
US9811721B2 (en) 2014-08-15 2017-11-07 Apple Inc. Three-dimensional hand tracking using depth sequences
US9437002B2 (en) 2014-09-25 2016-09-06 Elwha Llc Systems and methods for a dual modality sensor system
US10062411B2 (en) 2014-12-11 2018-08-28 Jeffrey R. Hay Apparatus and method for visualizing periodic motions in mechanical components
US20160217588A1 (en) 2014-12-11 2016-07-28 Jeffrey R. Hay Method of Adaptive Array Comparison for the Detection and Characterization of Periodic Motion
CN104571510B (en) * 2014-12-30 2018-05-04 青岛歌尔声学科技有限公司 A kind of system and method that gesture is inputted in 3D scenes
US9344615B1 (en) 2015-01-26 2016-05-17 International Business Machines Corporation Discriminating visual recognition program for digital cameras
KR101683189B1 (en) * 2015-02-09 2016-12-06 선문대학교산학협력단 Paired-edge based hand detection method using depth image
US10254881B2 (en) 2015-06-29 2019-04-09 Qualcomm Incorporated Ultrasonic touch sensor-based virtual button
US10318798B2 (en) * 2015-07-10 2019-06-11 Booz Allen Hamilton Inc. Device and method for detecting non-visible content in a non-contact manner
KR101639066B1 (en) 2015-07-14 2016-07-13 한국과학기술연구원 Method and system for controlling virtual model formed in virtual space
US9995823B2 (en) 2015-07-31 2018-06-12 Elwha Llc Systems and methods for utilizing compressed sensing in an entertainment system
KR101745406B1 (en) * 2015-09-03 2017-06-12 한국과학기술연구원 Apparatus and method of hand gesture recognition based on depth image
US10120454B2 (en) * 2015-09-04 2018-11-06 Eyesight Mobile Technologies Ltd. Gesture recognition control device
US10048765B2 (en) 2015-09-25 2018-08-14 Apple Inc. Multi media computing or entertainment system for responding to user presence and activity
US10019072B2 (en) 2016-01-01 2018-07-10 International Business Machines Corporation Imagined grid fingertip input editor on wearable device
US10185400B2 (en) * 2016-01-11 2019-01-22 Antimatter Research, Inc. Gesture control device with fingertip identification
CN105903157B (en) * 2016-04-19 2018-08-10 深圳泰山体育科技股份有限公司 Electronic coach realization method and system
US10372228B2 (en) 2016-07-20 2019-08-06 Usens, Inc. Method and system for 3D hand skeleton tracking
JP6318202B2 (en) * 2016-08-18 2018-04-25 株式会社カプコン Game program and game system
US9817511B1 (en) 2016-09-16 2017-11-14 International Business Machines Corporation Reaching any touch screen portion with one hand
US20180214761A1 (en) * 2017-01-27 2018-08-02 The Johns Hopkins University Rehabilitation and training gaming system to promote cognitive-motor engagement description
US10353579B1 (en) * 2018-03-28 2019-07-16 Disney Enterprises, Inc. Interpreting user touch gestures to generate explicit instructions
CN108762505A (en) * 2018-05-29 2018-11-06 腾讯科技(深圳)有限公司 Virtual object control method, device, storage medium based on gesture and equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2107328C1 (en) * 1996-08-14 1998-03-20 Нурахмед Нурисламович Латыпов Method for tracing and displaying of position and orientation of user in three-dimensional space and device which implements said method
US5767842A (en) * 1992-02-07 1998-06-16 International Business Machines Corporation Method and device for optical input of commands or data
US20040193413A1 (en) * 2003-03-25 2004-09-30 Wilson Andrew D. Architecture for controlling a computer using hand gestures
US7042440B2 (en) * 1997-08-22 2006-05-09 Pryor Timothy R Man machine interfaces and applications
US20100302247A1 (en) * 2009-05-29 2010-12-02 Microsoft Corporation Target digitization, extraction, and tracking

Family Cites Families (173)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4695953A (en) 1983-08-25 1987-09-22 Blair Preston E TV animation interactively controlled by the viewer
US4711543A (en) 1986-04-14 1987-12-08 Blair Preston E TV animation interactively controlled by the viewer
US4630910A (en) 1984-02-16 1986-12-23 Robotic Vision Systems, Inc. Method of measuring in three-dimensions at high speed
US4627620A (en) 1984-12-26 1986-12-09 Yang John P Electronic athlete trainer for improving skills in reflex, speed and accuracy
US4645458A (en) 1985-04-15 1987-02-24 Harald Phillip Athletic evaluation and training apparatus
US4702475A (en) 1985-08-16 1987-10-27 Innovating Training Products, Inc. Sports technique and reaction training system
US4843568A (en) 1986-04-11 1989-06-27 Krueger Myron W Real time perception of and response to the actions of an unencumbered participant/user
US4796997A (en) 1986-05-27 1989-01-10 Synthetic Vision Systems, Inc. Method and system for high-speed, 3-D imaging of an object at a vision station
US5184295A (en) 1986-05-30 1993-02-02 Mann Ralph V System and method for teaching physical skills
US4751642A (en) 1986-08-29 1988-06-14 Silva John M Interactive sports simulation system with physiological sensing and psychological conditioning
US4809065A (en) 1986-12-01 1989-02-28 Kabushiki Kaisha Toshiba Interactive system and related method for displaying data to produce a three-dimensional image of an object
US4817950A (en) 1987-05-08 1989-04-04 Goo Paul E Video game control unit and attitude sensor
US5239463A (en) 1988-08-04 1993-08-24 Blair Preston E Method and apparatus for player interaction with animated characters and objects
US5239464A (en) 1988-08-04 1993-08-24 Blair Preston E Interactive video system providing repeated switching of multiple tracks of actions sequences
US4901362A (en) 1988-08-08 1990-02-13 Raytheon Company Method of recognizing patterns
US4893183A (en) 1988-08-11 1990-01-09 Carnegie-Mellon University Robotic vision system
JPH02199526A (en) 1988-10-14 1990-08-07 David G Capper Control interface device
US5469740A (en) 1989-07-14 1995-11-28 Impulse Technology, Inc. Interactive video testing and training system
US4925189A (en) 1989-01-13 1990-05-15 Braeunig Thomas F Body-mounted video game exercise device
US5229756A (en) 1989-02-07 1993-07-20 Yamaha Corporation Image control apparatus
JPH03103822U (en) 1990-02-13 1991-10-29
US5101444A (en) 1990-05-18 1992-03-31 Panacea, Inc. Method and apparatus for high speed object location
US5148154A (en) 1990-12-04 1992-09-15 Sony Corporation Of America Multi-dimensional user interface
US5534917A (en) 1991-05-09 1996-07-09 Very Vivid, Inc. Video image based control system
JPH0519957A (en) * 1991-07-15 1993-01-29 Nippon Telegr & Teleph Corp <Ntt> Information inputting method
US5295491A (en) 1991-09-26 1994-03-22 Sam Technology, Inc. Non-invasive human neurocognitive performance capability testing method and system
US6054991A (en) 1991-12-02 2000-04-25 Texas Instruments Incorporated Method of modeling player position and movement in a virtual reality system
WO1993010708A1 (en) 1991-12-03 1993-06-10 French Sportech Corporation Interactive video testing and training system
US5875108A (en) 1991-12-23 1999-02-23 Hoffberg; Steven M. Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
US5417210A (en) 1992-05-27 1995-05-23 International Business Machines Corporation System and method for augmentation of endoscopic surgery
JPH07325934A (en) 1992-07-10 1995-12-12 Walt Disney Co:The Method and device for supplying improved graphics to virtual world
US5999908A (en) 1992-08-06 1999-12-07 Abelow; Daniel H. Customer-based product design module
US5320538A (en) 1992-09-23 1994-06-14 Hughes Training, Inc. Interactive aircraft training system and method
IT1257294B (en) 1992-11-20 1996-01-12 A device adapted to detect the configuration of a distal physiological unit, to be used in particular as an advanced interface for machines and computers.
US5495576A (en) 1993-01-11 1996-02-27 Ritchey; Kurtis J. Panoramic image based virtual reality/telepresence audio-visual system and method
US5690582A (en) 1993-02-02 1997-11-25 Tectrix Fitness Equipment, Inc. Interactive exercise apparatus
JP2799126B2 (en) 1993-03-26 1998-09-17 株式会社ナムコ Video game apparatus and a game input device
US5405152A (en) 1993-06-08 1995-04-11 The Walt Disney Company Method and apparatus for an interactive video game with physical feedback
US5454043A (en) 1993-07-30 1995-09-26 Mitsubishi Electric Research Laboratories, Inc. Dynamic and static hand gesture recognition through low-level image analysis
US5423554A (en) 1993-09-24 1995-06-13 Metamedia Ventures, Inc. Virtual reality game method and apparatus
US5980256A (en) 1993-10-29 1999-11-09 Carmein; David E. E. Virtual reality system with enhanced sensory apparatus
JP3419050B2 (en) 1993-11-19 2003-06-23 株式会社日立製作所 Input device
US5347306A (en) 1993-12-17 1994-09-13 Mitsubishi Electric Research Laboratories, Inc. Animated electronic meeting place
JP2552427B2 (en) 1993-12-28 1996-11-13 コナミ株式会社 TV game system
US5577981A (en) 1994-01-19 1996-11-26 Jarvik; Robert Virtual reality exercise machine and computer controlled video system
US5580249A (en) 1994-02-14 1996-12-03 Sarcos Group Apparatus for simulating mobility of a human
US5597309A (en) 1994-03-28 1997-01-28 Riess; Thomas Method and apparatus for treatment of gait problems associated with parkinson's disease
US5385519A (en) 1994-04-19 1995-01-31 Hsu; Chi-Hsueh Running machine
US5524637A (en) 1994-06-29 1996-06-11 Erickson; Jon W. Interactive system for measuring physiological exertion
JPH0844490A (en) 1994-07-28 1996-02-16 Matsushita Electric Ind Co Ltd Interface device
US5563988A (en) 1994-08-01 1996-10-08 Massachusetts Institute Of Technology Method and system for facilitating wireless, full-body, real-time user interaction with a digitally represented visual environment
US6714665B1 (en) 1994-09-02 2004-03-30 Sarnoff Corporation Fully automated iris recognition system utilizing wide and narrow fields of view
US5516105A (en) 1994-10-06 1996-05-14 Exergame, Inc. Acceleration activated joystick
US5638300A (en) 1994-12-05 1997-06-10 Johnson; Lee E. Golf swing analysis system
JPH08161292A (en) 1994-12-09 1996-06-21 Matsushita Electric Ind Co Ltd Method and system for detecting congestion degree
US5594469A (en) 1995-02-21 1997-01-14 Mitsubishi Electric Information Technology Center America Inc. Hand gesture machine control system
US5682229A (en) 1995-04-14 1997-10-28 Schwartz Electro-Optics, Inc. Laser range camera
US5913727A (en) 1995-06-02 1999-06-22 Ahdoot; Ned Interactive movement and contact simulation game
WO1996041304A1 (en) 1995-06-07 1996-12-19 The Trustees Of Columbia University In The City Of New York Apparatus and methods for determining the three-dimensional shape of an object using active illumination and relative blurring in two images due to defocus
US5682196A (en) 1995-06-22 1997-10-28 Actv, Inc. Three-dimensional (3D) video presentation system providing interactive 3D presentation with personalized audio responses for multiple viewers
US5702323A (en) 1995-07-26 1997-12-30 Poulton; Craig K. Electronic exercise enhancer
US6430997B1 (en) 1995-11-06 2002-08-13 Trazer Technologies, Inc. System and method for tracking and assessing movement skills in multidimensional space
US6098458A (en) 1995-11-06 2000-08-08 Impulse Technology, Ltd. Testing and training system for assessing movement and agility skills without a confining field
US6308565B1 (en) 1995-11-06 2001-10-30 Impulse Technology Ltd. System and method for tracking and assessing movement skills in multidimensional space
WO1999044698A2 (en) 1998-03-03 1999-09-10 Arena, Inc. System and method for tracking and assessing movement skills in multidimensional space
US6073489A (en) 1995-11-06 2000-06-13 French; Barry J. Testing and training system for assessing the ability of a player to complete a task
US5933125A (en) 1995-11-27 1999-08-03 Cae Electronics, Ltd. Method and apparatus for reducing instability in the display of a virtual environment
US5641288A (en) 1996-01-11 1997-06-24 Zaenglein, Jr.; William G. Shooting simulating process and training device using a virtual reality display screen
AU3283497A (en) 1996-05-08 1997-11-26 Real Vision Corporation Real time simulation using position sensing
US6173066B1 (en) 1996-05-21 2001-01-09 Cybernet Systems Corporation Pose determination and tracking by matching 3D objects to a 2D sensor
US5989157A (en) 1996-08-06 1999-11-23 Walton; Charles A. Exercising system with electronic inertial game playing
AU3954997A (en) 1996-08-14 1998-03-06 Nurakhmed Nurislamovich Latypov Method for following and imaging a subject's three-dimensional position and orientation, method for presenting a virtual space to a subject, and systems for implementing said methods
JP3064928B2 (en) 1996-09-20 2000-07-12 NEC Corporation Subject extraction method
EP0849697B1 (en) 1996-12-20 2003-02-12 Hitachi Europe Limited A hand gesture recognition system and method
US6009210A (en) 1997-03-05 1999-12-28 Digital Equipment Corporation Hands-free interface to a virtual reality environment using head tracking
US6100896A (en) 1997-03-24 2000-08-08 Mitsubishi Electric Information Technology Center America, Inc. System for designing graphical multi-participant environments
US5877803A (en) 1997-04-07 1999-03-02 Tritech Microelectronics International, Ltd. 3-D image detector
US6215898B1 (en) 1997-04-15 2001-04-10 Interval Research Corporation Data processing system and method
JP3077745B2 (en) 1997-07-31 2000-08-14 NEC Corporation Data processing method and apparatus, and information storage medium
US6188777B1 (en) 1997-08-01 2001-02-13 Interval Research Corporation Method and apparatus for personnel detection and tracking
US6289112B1 (en) 1997-08-22 2001-09-11 International Business Machines Corporation System and method for determining block direction in fingerprint images
AUPO894497A0 (en) 1997-09-02 1997-09-25 Xenotech Research Pty Ltd Image processing method and apparatus
EP0905644A3 (en) 1997-09-26 2004-02-25 Communications Research Laboratory, Ministry of Posts and Telecommunications Hand gesture recognizing device
US6141463A (en) 1997-10-10 2000-10-31 Electric Planet Interactive Method and system for estimating jointed-figure configurations
US6130677A (en) 1997-10-15 2000-10-10 Electric Planet, Inc. Interactive computer vision system
US6101289A (en) 1997-10-15 2000-08-08 Electric Planet, Inc. Method and apparatus for unencumbered capture of an object
US6072494A (en) 1997-10-15 2000-06-06 Electric Planet, Inc. Method and apparatus for real-time gesture recognition
US6384819B1 (en) 1997-10-15 2002-05-07 Electric Planet, Inc. System and method for generating an animatable character
AU1099899A (en) 1997-10-15 1999-05-03 Electric Planet, Inc. Method and apparatus for performing a clean background subtraction
US6176782B1 (en) 1997-12-22 2001-01-23 Philips Electronics North America Corp. Motion-based command generation technology
US6181343B1 (en) 1997-12-23 2001-01-30 Philips Electronics North America Corp. System and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs
US6159100A (en) 1998-04-23 2000-12-12 Smith; Michael D. Virtual reality game
US6077201A (en) 1998-06-12 2000-06-20 Cheng; Chau-Yang Exercise bicycle
US6681031B2 (en) 1998-08-10 2004-01-20 Cybernet Systems Corporation Gesture-controlled interfaces for self-service machines and other applications
US6950534B2 (en) 1998-08-10 2005-09-27 Cybernet Systems Corporation Gesture-controlled interfaces for self-service machines and other applications
US6801637B2 (en) 1999-08-10 2004-10-05 Cybernet Systems Corporation Optical body tracker
US20010008561A1 (en) 1999-08-10 2001-07-19 Paul George V. Real-time object tracking system
US7121946B2 (en) 1998-08-10 2006-10-17 Cybernet Systems Corporation Real-time head tracking system for computer games and other applications
US7036094B1 (en) 1998-08-10 2006-04-25 Cybernet Systems Corporation Behavior recognition system
US7050606B2 (en) 1999-08-10 2006-05-23 Cybernet Systems Corporation Tracking and gesture recognition system particularly suited to vehicular control applications
IL126284A (en) 1998-09-17 2002-12-01 Netmor Ltd System and method for three dimensional positioning and tracking
DE69936620T2 (en) 1998-09-28 2008-05-21 Matsushita Electric Industrial Co., Ltd., Kadoma Method and device for segmenting hand gestures
WO2000034919A1 (en) 1998-12-04 2000-06-15 Interval Research Corporation Background estimation and segmentation based on range and color
US6147678A (en) 1998-12-09 2000-11-14 Lucent Technologies Inc. Video hand image-three-dimensional computer interface with multiple degrees of freedom
AU1574899A (en) 1998-12-16 2000-07-03 3Dv Systems Ltd. Self gating photosurface
US6570555B1 (en) 1998-12-30 2003-05-27 Fuji Xerox Co., Ltd. Method and apparatus for embodied conversational characters with multimodal input/output in an interface device
US6363160B1 (en) 1999-01-22 2002-03-26 Intel Corporation Interface using pattern recognition and tracking
US7003134B1 (en) 1999-03-08 2006-02-21 Vulcan Patents Llc Three dimensional object pose estimation which employs dense depth information
US6299308B1 (en) 1999-04-02 2001-10-09 Cybernet Systems Corporation Low-cost non-imaging eye tracker system for computer control
US6503195B1 (en) 1999-05-24 2003-01-07 University Of North Carolina At Chapel Hill Methods and systems for real-time structured light depth extraction and endoscope using real-time structured light depth extraction
US6476834B1 (en) 1999-05-28 2002-11-05 International Business Machines Corporation Dynamic creation of selectable items on surfaces
US6873723B1 (en) 1999-06-30 2005-03-29 Intel Corporation Segmenting three-dimensional video images using stereo
US6738066B1 (en) 1999-07-30 2004-05-18 Electric Planet, Inc. System, method and article of manufacture for detecting collisions between video images generated by a camera and an object depicted on a display
US7113918B1 (en) 1999-08-01 2006-09-26 Electric Planet, Inc. Method for video enabled electronic commerce
US6663491B2 (en) 2000-02-18 2003-12-16 Namco Ltd. Game apparatus, storage medium and computer program that adjust tempo of sound
US6633294B1 (en) 2000-03-09 2003-10-14 Seth Rosenthal Method and apparatus for using captured high density motion for animation
US6624833B1 (en) * 2000-04-17 2003-09-23 Lucent Technologies Inc. Gesture-based input interface system with shadow detection
EP1152261A1 (en) 2000-04-28 2001-11-07 CSEM Centre Suisse d'Electronique et de Microtechnique SA Device and method for spatially resolved photodetection and demodulation of modulated electromagnetic waves
US6640202B1 (en) 2000-05-25 2003-10-28 International Business Machines Corporation Elastic sensor mesh system for 3-dimensional measurement, mapping and kinematics applications
US6731799B1 (en) 2000-06-01 2004-05-04 University Of Washington Object segmentation with background extraction and moving boundary techniques
US6788809B1 (en) 2000-06-30 2004-09-07 Intel Corporation System and method for gesture recognition in three dimensions using stereo imaging and color vision
US7227526B2 (en) 2000-07-24 2007-06-05 Gesturetek, Inc. Video-based image control system
US7058204B2 (en) 2000-10-03 2006-06-06 Gesturetek, Inc. Multiple camera control system
US7039676B1 (en) 2000-10-31 2006-05-02 International Business Machines Corporation Using video image analysis to automatically transmit gestures over a network in a chat or instant messaging session
US6539931B2 (en) 2001-04-16 2003-04-01 Koninklijke Philips Electronics N.V. Ball throwing assistant
US7259747B2 (en) 2001-06-05 2007-08-21 Reactrix Systems, Inc. Interactive video display system
US7348963B2 (en) 2002-05-28 2008-03-25 Reactrix Systems, Inc. Interactive video display system
US8035612B2 (en) 2002-05-28 2011-10-11 Intellectual Ventures Holding 67 Llc Self-contained interactive video display system
US7170492B2 (en) 2002-05-28 2007-01-30 Reactrix Systems, Inc. Interactive video display system
US7710391B2 (en) 2002-05-28 2010-05-04 Matthew Bell Processing an image utilizing a spatially varying pattern
JP3420221B2 (en) 2001-06-29 2003-06-23 Konami Computer Entertainment Tokyo, Inc. Game apparatus and program
US7274800B2 (en) 2001-07-18 2007-09-25 Intel Corporation Dynamic gesture recognition from stereo sequences
US6937742B2 (en) 2001-09-28 2005-08-30 Bellsouth Intellectual Property Corporation Gesture activated home appliance
US7340077B2 (en) 2002-02-15 2008-03-04 Canesta, Inc. Gesture recognition system using depth perceptive sensors
AT321689T (en) 2002-04-19 2006-04-15 Iee Sarl Safety device for a vehicle
US7489812B2 (en) 2002-06-07 2009-02-10 Dynamic Digital Depth Research Pty Ltd. Conversion and encoding techniques
US7576727B2 (en) 2002-12-13 2009-08-18 Matthew Bell Interactive directed light/sound system
JP4235729B2 (en) 2003-02-03 2009-03-11 National University Corporation Shizuoka University Distance image sensor
EP1477924B1 (en) 2003-03-31 2007-05-02 HONDA MOTOR CO., Ltd. Gesture recognition apparatus, method and program
US8072470B2 (en) 2003-05-29 2011-12-06 Sony Computer Entertainment Inc. System and method for providing a real-time three-dimensional interactive environment
WO2004107266A1 (en) 2003-05-29 2004-12-09 Honda Motor Co., Ltd. Visual tracking using depth data
EP3190546A3 (en) 2003-06-12 2017-10-04 Honda Motor Co., Ltd. Target orientation estimation using depth sensing
WO2005041579A2 (en) 2003-10-24 2005-05-06 Reactrix Systems, Inc. Method and system for processing captured image information in an interactive video display system
JP4708422B2 (en) 2004-04-15 2011-06-22 GestureTek, Inc. Tracking of two-hand movement
US7308112B2 (en) 2004-05-14 2007-12-11 Honda Motor Co., Ltd. Sign based human-machine interaction
US7704135B2 (en) 2004-08-23 2010-04-27 Harrison Jr Shelton E Integrated game system, method, and device
KR20060070280A (en) 2004-12-20 2006-06-23 한국전자통신연구원 Apparatus and its method of user interface using hand gesture recognition
CN101622630B (en) 2005-01-07 2012-07-04 高通股份有限公司 Detecting and tracking objects in images
JP2008537190A (en) 2005-01-07 2008-09-11 GestureTek, Inc. Generation of a three-dimensional image of an object by irradiation with an infrared pattern
EP1849123A2 (en) 2005-01-07 2007-10-31 GestureTek, Inc. Optical flow based tilt sensor
KR100960577B1 (en) 2005-02-08 2010-06-03 Oblong Industries, Inc. System and method for gesture based control system
WO2006099597A2 (en) 2005-03-17 2006-09-21 Honda Motor Co., Ltd. Pose estimation based on critical point analysis
WO2006124935A2 (en) 2005-05-17 2006-11-23 Gesturetek, Inc. Orientation-sensitive signal output
EP1752748B1 (en) 2005-08-12 2008-10-29 MESA Imaging AG Highly sensitive, fast pixel for use in an image sensor
US20080026838A1 (en) 2005-08-22 2008-01-31 Dunstan James E Multi-player non-role-playing virtual world games: method for two-way interaction between participants and multi-player virtual world games
US7450736B2 (en) 2005-10-28 2008-11-11 Honda Motor Co., Ltd. Monocular tracking of 3D human motion with a coordinated mixture of factor analyzers
JP2007134913A (en) * 2005-11-09 2007-05-31 Matsushita Electric Ind Co Ltd Method and device for selecting image
US7701439B2 (en) 2006-07-13 2010-04-20 Northrop Grumman Corporation Gesture recognition simulation system and method
JP5395323B2 (en) 2006-09-29 2014-01-22 Brainvision Inc. Solid-state image sensor
US7412077B2 (en) 2006-12-29 2008-08-12 Motorola, Inc. Apparatus and methods for head pose estimation and head gesture detection
US8144148B2 (en) 2007-02-08 2012-03-27 Edge 3 Technologies Llc Method and system for vision-based interaction in a virtual environment
US7729530B2 (en) 2007-03-03 2010-06-01 Sergey Antonov Method and apparatus for 3-D data input to a personal computer with a multimedia oriented operating system
TW200907764A (en) 2007-08-01 2009-02-16 Unique Instr Co Ltd Three-dimensional virtual input and simulation apparatus
US7852262B2 (en) 2007-08-16 2010-12-14 Cybernet Systems Corporation Wireless mobile indoor/outdoor tracking system
US7970176B2 (en) 2007-10-02 2011-06-28 Omek Interactive, Inc. Method and system for gesture classification
CN101254344B (en) 2008-04-18 2010-06-16 李刚 Game device of field orientation corresponding with display screen dot array in proportion and method
KR100931403B1 (en) * 2008-06-25 2009-12-11 Korea Institute of Science and Technology Device and information controlling system on network using hand gestures
JP2010072840A (en) * 2008-09-17 2010-04-02 Denso Corp Image display method, image display device, and operation device using the same
US8379987B2 (en) * 2008-12-30 2013-02-19 Nokia Corporation Method, apparatus and computer program product for providing hand segmentation for gesture analysis
JP5177075B2 (en) 2009-02-12 2013-04-03 Sony Corporation Motion recognition device, motion recognition method, and program
US8744121B2 (en) * 2009-05-29 2014-06-03 Microsoft Corporation Device for identifying and tracking multiple humans over time
US20110025689A1 (en) * 2009-07-29 2011-02-03 Microsoft Corporation Auto-Generating A Visual Representation
US8843857B2 (en) 2009-11-19 2014-09-23 Microsoft Corporation Distance scalable no touch computing

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5767842A (en) * 1992-02-07 1998-06-16 International Business Machines Corporation Method and device for optical input of commands or data
RU2107328C1 (en) * 1996-08-14 1998-03-20 Нурахмед Нурисламович Латыпов Method for tracing and displaying of position and orientation of user in three-dimensional space and device which implements said method
US7042440B2 (en) * 1997-08-22 2006-05-09 Pryor Timothy R Man machine interfaces and applications
US20040193413A1 (en) * 2003-03-25 2004-09-30 Wilson Andrew D. Architecture for controlling a computer using hand gestures
US20100302247A1 (en) * 2009-05-29 2010-12-02 Microsoft Corporation Target digitization, extraction, and tracking

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
RU2662399C1 (en) * 2017-03-17 2018-07-25 Алексей Александрович Тарасов System and method for capturing movements and positions of human body and parts of human body

Also Published As

Publication number Publication date
JP6021901B2 (en) 2016-11-09
CA2837470A1 (en) 2012-12-13
WO2012170349A3 (en) 2013-01-31
US20120309532A1 (en) 2012-12-06
JP2014524070A (en) 2014-09-18
CA2837470C (en) 2019-09-03
WO2012170349A2 (en) 2012-12-13
RU2013154102A (en) 2015-06-10
US8897491B2 (en) 2014-11-25
EP2718900A2 (en) 2014-04-16
AU2012268589B2 (en) 2016-12-15
MX2013014393A (en) 2014-03-21
KR101956325B1 (en) 2019-03-08
KR20140024421A (en) 2014-02-28
EP2718900A4 (en) 2015-02-18

Similar Documents

Publication Publication Date Title
JP6074170B2 (en) Short range motion tracking system and method
US9465980B2 (en) Pose tracking pipeline
KR101700468B1 (en) Bringing a visual representation to life via learned input from the user
US8320621B2 (en) Depth projector system with integrated VCSEL array
EP2435892B1 (en) Gesture shortcuts
US8437506B2 (en) System for fast, probabilistic skeletal tracking
CN102332090B (en) Compartmentalizing focus area within field of view
US8578302B2 (en) Predictive determination
US8448056B2 (en) Validation analysis of human target
US9478057B2 (en) Chaining animations
CN102289815B (en) Detecting motion for a multifunction sensor device
RU2560794C2 (en) Visual representation expression based on player expression
CN102448566B (en) Gestures beyond skeletal
US8558873B2 (en) Use of wavefront coding to create a depth image
CN102301315B (en) Gesture recognizer system architecture
US9377857B2 (en) Show body position
CN102194105B (en) Proxy training data for human body tracking
US8351651B2 (en) Hand-location post-process refinement in a tracking system
US8499257B2 Handles interactions for human-computer interface
US8760395B2 (en) Gesture recognition techniques
US8659658B2 (en) Physical interaction zone for gesture-based user interfaces
US8166421B2 (en) Three-dimensional user interface
CN102470274B (en) Automatic generation of visual representation
US10048763B2 (en) Distance scalable no touch computing
KR101679442B1 (en) Standard gestures