CN102448561A - Gesture coach - Google Patents

Gesture coach

Info

Publication number
CN102448561A
CN102448561A, CN201080024659A
Authority
CN
China
Prior art keywords
user
gesture
data
output
help
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2010800246590A
Other languages
Chinese (zh)
Other versions
CN102448561B (en)
Inventor
G·N·斯努克
S·拉塔
K·盖斯那
D·A·贝内特
K·兹努达
A·基普曼
K·S·佩雷
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corp
Publication of CN102448561A
Application granted
Publication of CN102448561B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 Movements or behaviour, e.g. gesture recognition
    • G06V 40/23 Recognition of whole body movements, e.g. for sport training
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B 19/00 Teaching not covered by other main groups of this subclass
    • G09B 19/003 Repetitive work cycles; Sequence of movements
    • G09B 19/0038 Sports
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/30 ICT specially adapted for therapies or health-improving plans relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Business, Economics & Management (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Medical Informatics (AREA)
  • Multimedia (AREA)
  • Psychiatry (AREA)
  • Biophysics (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Epidemiology (AREA)
  • Social Psychology (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Educational Administration (AREA)
  • Educational Technology (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions or not know what gestures are applicable for an executing application. A user may not understand or know how to perform gestures that are applicable for the executing application. User motion data and/or outputs of filters corresponding to gestures may be analyzed to determine those cases where assistance to the user on performing the gesture is appropriate.

Description

Gesture coach
Background
Many computing applications, such as computer games, multimedia applications, and office applications, use controls to allow users to manipulate game characters or other aspects of an application. Typically, such controls are input using, for example, controllers, remotes, keyboards, mice, or the like. Unfortunately, these controls can be difficult to learn and thus create a barrier between the user and such games and applications. Furthermore, the controls may differ from the actual game actions or other application actions for which they are used. For example, a game control that causes a game character to swing a baseball bat may not correspond to the actual motion of swinging a baseball bat.
Summary of the invention
In some systems, a display device may display a model that maps to user motions captured by the system. For example, the model may be displayed as an avatar on a screen, where the avatar's motion can be controlled by mapping the user's motion in physical space to the avatar's motion in an application space. A user may be unfamiliar with a system that maps his or her motions. For example, the user may not know which gestures are applicable to an executing application. In some cases, the user does not understand or does not know how to perform the gestures that apply to the executing application. Written or pictorial descriptions in a manual may be insufficient to teach the user how to gesture correctly.
Disclosed herein are systems and methods for gesture coaching. When a user attempts a gesture, the captured user data and the outputs of the gesture filters corresponding to that data may be analyzed to determine that the user attempted but failed to perform the gesture, and that offering the user help is appropriate. The help may include instruction in the correct way to perform the gesture. For example, the output of a filter may include a confidence level that the corresponding gesture was performed; when that confidence level is below a recognition threshold, it may be determined that help is appropriate to teach the user to perform the gesture in a manner that raises the corresponding confidence level above the recognition threshold.
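For illustration only, the threshold test just described might be implemented along the following lines. This is a minimal sketch, not the patent's implementation; the function name, the default recognition threshold, and the 0.2 "attempt floor" are all invented for the example.

```python
def should_offer_help(confidence: float,
                      recognition_threshold: float = 0.8,
                      attempt_floor: float = 0.2) -> bool:
    """Offer coaching when the filter output suggests the user is
    attempting the gesture (above a low floor) but not completing it
    (below the recognition threshold). Both thresholds are assumed values."""
    return attempt_floor <= confidence < recognition_threshold

print(should_offer_help(0.55))  # True: attempted but not recognized
print(should_offer_help(0.9))   # False: gesture recognized
```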
This Summary is provided to introduce, in simplified form, a selection of concepts that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all of the disadvantages noted in any part of this disclosure.
Brief description of the drawings
The systems, methods, and computer-readable media for gesture coaching in accordance with this specification are further described with reference to the accompanying drawings, in which:
FIGS. 1A and 1B illustrate an example embodiment of a target recognition, analysis, and tracking system with a user playing a game.
FIG. 2 illustrates an example embodiment of a capture device that may be used in a target recognition, analysis, and tracking system and that incorporates gesture coaching.
FIG. 3 illustrates an example embodiment of a computing environment in which the techniques for gesture coaching disclosed herein may be implemented.
FIG. 4 illustrates another example embodiment of a computing environment in which the techniques for gesture coaching disclosed herein may be implemented.
FIG. 5A illustrates a skeletal mapping of a user generated from a depth image.
FIG. 5B illustrates further details of the gesture recognizer architecture shown in FIG. 2.
FIGS. 6A-6D illustrate another example embodiment of target recognition, analysis, and tracking, with a user playing a boxing game.
FIGS. 7A-7D illustrate an example display of visual assistance for a gesture shown side by side with a visual representation of the user's gesture.
FIG. 8A illustrates an example display of visual assistance superimposed over a visual representation of the user's gesture.
FIG. 8B illustrates an example display of visual assistance comprising a demonstration of a gesture.
FIG. 9 illustrates an example display with an option to enter a training mode and receive visual assistance.
FIG. 10 illustrates remote users interacting over a network connection, where visual assistance for one user's motion is provided to a second user.
FIG. 11A depicts an example flow of operations for gesture coaching.
FIGS. 11B and 11C depict example architectures that integrate gesture coaching with a gesture recognizer engine and an application.
FIGS. 12A and 12B depict example filter outputs from which it may be determined that gesture coaching is appropriate.
Detailed description
As will be described herein, a user may control an application executing on a computing environment, such as a game console or a computer, by performing one or more gestures. Disclosed herein are systems and methods for demonstrating to the user the motion of a desired gesture. For example, the computing environment may provide visual assistance that trains the user in the proper motions applicable to the executing application.
To generate a model representing a target or object in physical space, a capture device can capture a depth image of that physical space and scan targets in the scene. A target may include humans or other objects in the scene. In one embodiment, the capture device may determine whether one or more targets in the scene correspond to a human target such as the user. To determine whether a target in the scene corresponds to a human target, each target may be flood filled and compared to a pattern of a human body model. A target identified as human may be scanned to generate an associated skeletal model. The skeletal model may then be provided to the computing environment, which tracks the skeletal model and renders an avatar associated with it. The computing environment may map the user's motions in physical space to a visual representation, such as an avatar, on a display device. The computing environment may determine which controls to perform in the application executing on the computing environment based on, for example, gestures of the user that have been recognized and mapped to the skeletal model. Thus, user feedback may be displayed, such as via an avatar on a screen, and the user can control that avatar's motion and execute controls of the operating system or executing application by, for example, making gestures in physical space.
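A toy sketch of the mapping from tracked physical-space positions to on-screen avatar coordinates follows; the screen and capture-volume dimensions are made-up values, and the coordinate conventions are assumptions for the example.

```python
def to_screen(x_m: float, y_m: float,
              screen_w: int = 1920, screen_h: int = 1080,
              space_w_m: float = 4.0, space_h_m: float = 3.0) -> tuple:
    """Map a joint position in physical space (meters, origin at the
    center of the capture volume, y up) to avatar screen coordinates."""
    sx = int((x_m / space_w_m + 0.5) * screen_w)
    sy = int((0.5 - y_m / space_h_m) * screen_h)
    return sx, sy

print(to_screen(0.0, 0.0))   # (960, 540): center of space -> center of screen
print(to_screen(1.0, 0.75))  # (1440, 270): upper-right quadrant
```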
In some cases, it is desirable to provide visual assistance that teaches the user how to gesture correctly in order to control the executing application. For example, the user may not know the motion corresponding to a particular gesture applicable to the executing application, or may not know how to perform that motion. The system can detect errors in the user's gesture and prompt the user to practice making the gesture correctly.
Some of the functional units described in this specification have been labeled as modules in order to more particularly emphasize their implementation independence. For example, a module may be implemented as a hardware circuit comprising custom VLSI circuits or gate arrays, off-the-shelf semiconductors such as logic chips and transistors, or other discrete components. A module may also be implemented in programmable hardware devices such as field programmable gate arrays, programmable array logic, programmable logic devices, or the like.
Modules may also be implemented in software for execution by various types of processors. An identified module of executable code may, for instance, comprise one or more physical or logical blocks of computer instructions which may, for example, be organized as an object, procedure, or function. Nevertheless, the executables of an identified module need not be physically located together, but may comprise disparate instructions stored in different locations which, when joined logically together, comprise the module and achieve its stated purpose.
The systems, methods, and components for providing visual training assistance described herein may be embodied in a multimedia console such as a game console, or in any other computing device in which visual assistance is desired, including, by way of example and without any intended limitation, satellite receivers, set-top boxes, arcade games, personal computers (PCs), portable telephones, personal digital assistants (PDAs), and other hand-held devices.
FIGS. 1A and 1B illustrate an example embodiment of a configuration of a target recognition, analysis, and tracking system 10 with a user 18 playing a boxing game. In an example embodiment, the system 10 may recognize, analyze, and/or track a human target such as the user 18. The system 10 may gather information related to the user's gestures in physical space.
The system 10 may provide the user with visual assistance demonstrating a desired gesture. The provision of visual assistance may be triggered in a number of ways. For example, the system may detect an error in the user's motion or a deviation from an expected motion; detecting such an error or deviation can trigger visual assistance that demonstrates the desired gesture. In another example, the executing application may provide visual assistance demonstrating the proper controlling motions for training purposes. Assistance may take a variety of forms, such as tactile, auditory, and visual. In one embodiment, the assistance comprises audio assistance, visual assistance, a change in a display element's color, flashing of a display element, fading of a display element, a change in a display element's tracking mode, or some combination of these forms.
As shown in FIG. 1A, the target recognition, analysis, and tracking system 10 may include a computing environment 12. The computing environment 12 may be a computer, a gaming system or console, or the like. According to an example embodiment, the computing environment 12 may include hardware components and/or software components such that the computing environment 12 may be used to execute applications such as gaming applications, non-gaming applications, and the like.
As shown in FIG. 1A, the target recognition, analysis, and tracking system 10 may further include a capture device 20. The capture device 20 may be, for example, a camera that may be used to visually monitor one or more users, such as the user 18, such that gestures performed by the one or more users may be captured, analyzed, and tracked to perform one or more controls or actions within an application, as will be described in more detail below.
According to one embodiment, the target recognition, analysis, and tracking system 10 may be connected to an audiovisual device 16, such as a television, a monitor, a high-definition television (HDTV), or the like, that may provide game or application visuals and/or audio to a user such as the user 18. For example, the computing environment 12 may include a video adapter such as a graphics card and/or an audio adapter such as a sound card that may provide audiovisual signals associated with a gaming application, a non-gaming application, or the like. The audiovisual device 16 may receive the audiovisual signals from the computing environment 12 and may then output the game or application visuals and/or audio associated with those signals to the user 18. According to one embodiment, the audiovisual device 16 may be connected to the computing environment 12 via, for example, an S-Video cable, a coaxial cable, an HDMI cable, a DVI cable, a VGA cable, or the like.
As shown in FIGS. 1A and 1B, the target recognition, analysis, and tracking system 10 may be used to recognize, analyze, and/or track a human target such as the user 18. For example, the user 18 may be tracked using the capture device 20 such that the movements of the user 18 may be interpreted as controls that may be used to affect the application being executed by the computing environment 12. Thus, according to one embodiment, the user 18 may move his or her body to control the application.
As shown in FIGS. 1A and 1B, in an example embodiment, the application executing on the computing environment 12 may be a boxing game that the user 18 may be playing. For example, the computing environment 12 may use the audiovisual device 16 to provide a visual representation of a boxing opponent 38 to the user 18. The computing environment 12 may also use the audiovisual device 16 to provide a visual representation of a player avatar 40 that the user 18 may control with his or her movements. For example, as shown in FIG. 1B, the user 18 may throw a punch in physical space to cause the player avatar 40 to throw a punch in game space. Thus, according to an example embodiment, the computing environment 12 and the capture device 20 of the target recognition, analysis, and tracking system 10 may be used to recognize and analyze the punch of the user 18 in physical space such that the punch may be interpreted as a game control of the player avatar 40 in game space.
Other movements by the user 18 may also be interpreted as other controls or actions, such as controls to bob, weave, shuffle, block, jab, or throw punches of varying power. Furthermore, some movements may be interpreted as controls that correspond to actions other than controlling the player avatar 40. For example, the player may use movements to end, pause, or save a game, select a level, view high scores, communicate with a friend, and so on.
As described in more detail below, the system 10 may provide the user 18 with visual assistance demonstrating gestures applicable to the executing application. In one example embodiment, the visual assistance is prerecorded content in the form of a skeletal representation, a ghosted image, a player avatar, or the like. In another example embodiment, live content may be presented to the user.
In example embodiments, a human target such as the user 18 may hold an object. In such embodiments, the user of an electronic game may be holding the object such that the motions of the player and the object may be used to adjust and/or control parameters of the game. For example, the motion of a player holding a racket may be tracked and utilized to control an on-screen racket in an electronic sports game. In another example embodiment, the motion of a player holding an object may be tracked and utilized to control an on-screen weapon in an electronic combat game. The system 10 may provide visual assistance demonstrating gestures associated with the motion of the held object in physical space and/or in the application space.
According to other example embodiments, the target recognition, analysis, and tracking system 10 may also be used to interpret target movements as operating system and/or application controls outside the realm of games. For example, virtually any controllable aspect of an operating system and/or application may be controlled by movements of a target such as the user 18. And the system 10 may provide visual assistance demonstrating gestures associated with any controllable aspect of the operating system and/or application.
FIG. 2 illustrates an example embodiment of the capture device 20 that may be used in the target recognition, analysis, and tracking system 10. According to an example embodiment, the capture device 20 may be configured to capture video with depth information, including a depth image that may include depth values, via any suitable technique including, for example, time-of-flight, structured light, stereo imaging, or the like. According to one embodiment, the capture device 20 may organize the calculated depth information into "Z layers," or layers perpendicular to a Z axis extending from the depth camera along its line of sight.
As shown in FIG. 2, the capture device 20 may include an image camera component 22. According to an example embodiment, the image camera component 22 may be a depth camera that may capture the depth image of a scene, together with an RGB camera 28 that may capture color from the scene. The depth image may include a two-dimensional (2-D) pixel area of the captured scene, where each pixel in the 2-D pixel area may represent a length (in units of, for example, centimeters, millimeters, or the like) from the camera to an object in the captured scene.
As shown in FIG. 2, according to an example embodiment, the image camera component 22 may include an IR light component 24, a three-dimensional (3-D) camera 26, and an RGB camera 28 that may be used to capture the depth image of a scene. For example, in time-of-flight analysis, the IR light component 24 of the capture device 20 may emit infrared light onto the scene and may then use sensors (not shown), such as the 3-D camera 26 and/or the RGB camera 28, to detect the light backscattered from the surfaces of one or more targets and objects in the scene. In some embodiments, pulsed infrared light may be used such that the time between an outgoing light pulse and a corresponding incoming light pulse may be measured and used to determine a physical distance from the capture device 20 to a particular location on a target or object in the scene. Additionally, in other example embodiments, the phase of the outgoing light wave may be compared to the phase of the incoming light wave to determine a phase shift. The phase shift may then be used to determine a physical distance from the capture device to a particular location on a target or object.
According to another example embodiment, time-of-flight analysis may be used to indirectly determine a physical distance from the capture device 20 to a particular location on a target or object by analyzing the intensity of the reflected beam of light over time via various techniques including, for example, shuttered light pulse imaging.
In another example embodiment, the capture device 20 may use structured light to capture depth information. In such an analysis, patterned light (that is, light displayed as a known pattern such as a grid or stripe pattern) may be projected onto the scene via, for example, the IR light component 24. Upon striking the surfaces of one or more targets or objects in the scene, the pattern may deform in response. Such a deformation of the pattern may be captured by, for example, the 3-D camera 26 and/or the RGB camera 28 and then analyzed to determine a physical distance from the capture device to a particular location on a target or object.
According to another embodiment, the capture device 20 may include two or more physically separated cameras that may view a scene from different angles to obtain visual stereo data that may be resolved to generate depth information.
The capture device 20 may further include a microphone 30. The microphone 30 may include a transducer or sensor that may receive sound and convert it into an electrical signal. According to one embodiment, the microphone 30 may be used to reduce feedback between the capture device 20 and the computing environment 12 in the target recognition, analysis, and tracking system 10. Additionally, the microphone 30 may be used to receive audio signals that may also be provided by the user to control applications, such as gaming applications, non-gaming applications, or the like, that may be executed by the computing environment 12.
In an example embodiment, the capture device 20 may further include a processor 32 in operative communication with the image camera component 22. The processor 32 may include a standardized processor, a specialized processor, a microprocessor, or the like that may execute instructions, which may include instructions for receiving the depth image, determining whether a suitable target may be included in the depth image, converting the suitable target into a skeletal representation or model of the target, or any other suitable instruction.
The capture device 20 may further include a memory component 34 that may store the instructions that may be executed by the processor 32, images or frames of images captured by the 3-D camera or RGB camera, or any other suitable information, images, or the like. According to an example embodiment, the memory component 34 may include random access memory (RAM), read-only memory (ROM), cache, flash memory, a hard disk, or any other suitable storage component. As shown in FIG. 2, in one embodiment, the memory component 34 may be a separate component in communication with the image capture component 22 and the processor 32. According to another embodiment, the memory component 34 may be integrated into the processor 32 and/or the image capture component 22.
As shown in FIG. 2, the capture device 20 may communicate with the computing environment 12 via a communication link 36. The communication link 36 may be a wired connection including, for example, a USB connection, a FireWire connection, or an Ethernet cable connection, and/or a wireless connection such as a wireless 802.11b, 802.11g, 802.11a, or 802.11n connection. According to one embodiment, the computing environment 12 may provide a clock to the capture device 20 via the communication link 36 that may be used to determine when to capture, for example, a scene.
Additionally, the capture device 20 may provide the depth information and images captured by, for example, the 3-D camera 26 and/or the RGB camera 28, and a skeletal model that may be generated by the capture device 20, to the computing environment 12 via the communication link 36. The computing environment 12 may then use the skeletal model, depth information, and captured images to, for example, recognize user gestures and, in response, control an application such as a game or word processor. For example, as shown in FIG. 2, the computing environment 12 may include a gesture recognizer engine 190 and a gestures library 192 comprising one or more gesture filters 191.
Each filter 191 may comprise information defining a gesture along with parameters, or metadata, for that gesture. The data captured by the cameras 26, 28 and the device 20 in the form of a skeletal model, and movements associated with it, may be compared to the gesture filters in the gestures library 192 to identify when the user (as represented by the skeletal model) has performed one or more gestures. Inputs to a filter such as filter 191 may comprise things such as joint data about the user's joint positions, angles formed by the bones that meet at a joint, RGB color data from the scene, and the rate of change of some aspect of the user.
For example, a throw, which comprises motion of one of the hands from behind the rear of the body to past the front of the body, may be implemented as a gesture filter comprising information representing the movement of one of the user's hands from behind the rear of the body to past the front of the body, as that movement would be captured by the depth camera. Image data from the scene may also be captured by the RGB camera. As mentioned, parameters may be set for the gesture. Where the gesture is a throw, the parameters may be a threshold velocity that the hand has to reach, a distance the hand must travel (either absolute, or relative to the size of the user as a whole), and a confidence rating by the recognizer engine that the gesture occurred. These parameters for the gesture may vary between applications, between contexts of a single application, or within one context of one application over time.
Outputs from a filter 191 may comprise things such as a confidence level that a given gesture is being made, the speed at which the gesture motion is made, and a time at which the gesture motion occurred. The gestures may be associated with various controls of an application. Thus, the computing environment 12 may use the gesture recognizer engine 190 to interpret movements of the skeletal model and to control an application based on those movements.
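As a concrete illustration, the throw filter just described might look like the following sketch. This is not the patent's implementation: the class and field names, units, and default parameter values are all invented for the example, and the confidence computation is a deliberately crude stand-in.

```python
from dataclasses import dataclass

@dataclass
class HandTrack:
    speeds: list           # hand speed samples over the motion, m/s (assumed unit)
    total_distance: float  # distance the hand traveled, m
    end_time: float        # timestamp of the last sample, s

@dataclass
class FilterOutput:
    confidence: float      # confidence that the gesture was made
    speed: float           # peak speed of the gesture motion
    timestamp: float       # time at which the motion occurred

@dataclass
class OverhandThrowFilter:
    # Tunable parameters; per the text, these may vary by application,
    # context, or skill level. Defaults here are arbitrary.
    threshold_velocity: float = 2.5  # m/s the hand must reach
    min_travel: float = 0.5          # m the hand must travel

    def evaluate(self, track: HandTrack) -> FilterOutput:
        peak = max(track.speeds)
        # Crude confidence: how fully each parameter was satisfied
        confidence = min(peak / self.threshold_velocity, 1.0) * \
                     min(track.total_distance / self.min_travel, 1.0)
        return FilterOutput(confidence, peak, track.end_time)

track = HandTrack(speeds=[0.5, 1.8, 3.1], total_distance=0.7, end_time=12.4)
print(OverhandThrowFilter().evaluate(track).confidence)  # 1.0: both parameters satisfied
```

An application could then compare `FilterOutput.confidence` against its recognition threshold, as in the Summary above.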
In an embodiment, a gesture filter comprises an algorithm that accepts one or more pieces of data about the user as input and returns one or more outputs about the corresponding gesture. For example, a "user height" gesture filter algorithm may take a mapped skeleton of the user as input, process that data, and return an output of the user's height as computed by the algorithm.
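A sketch of such a filter, under the assumption that a skeleton is represented as a mapping from hypothetical joint names to (x, y, z) positions in meters with y up (the representation and joint names are invented for the example):

```python
def user_height(skeleton: dict) -> float:
    """'User height' filter: the vertical extent from the head joint
    to the lower ankle, given joint name -> (x, y, z) in meters."""
    head_y = skeleton["head"][1]
    ankle_y = min(skeleton["ankle_left"][1], skeleton["ankle_right"][1])
    return head_y - ankle_y

# Example input with only the joints this filter reads:
print(user_height({"head": (0.0, 1.75, 2.0),
                   "ankle_left": (0.1, 0.15, 2.0),
                   "ankle_right": (-0.1, 0.15, 2.0)}))  # ~1.6
```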
The computing environment 12 may include a processor 195 that may process the depth image to determine what targets are in a scene, such as the user 18 or an object in the room. This may be done, for instance, by grouping together pixels of the depth image that share a similar distance value. The image may also be parsed to produce a skeletal representation of the user, in which features such as joints and the tissue that runs between joints are identified. There exist skeletal mapping techniques that use a depth camera to capture a person and from that determine many points on that user's skeleton: joints of the hand, wrists, elbows, knees, nose, ankles, shoulders, and where the pelvis meets the spine. Other techniques include transforming the image into a body-model representation of the person and transforming the image into a mesh-model representation of the person.
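A minimal version of the pixel-grouping step follows, assuming a NumPy depth image in meters with 0 marking "no reading." The tolerance value and 4-connectivity are arbitrary choices for the sketch, not details from the patent.

```python
import numpy as np
from collections import deque

def group_by_depth(depth: np.ndarray, tol: float = 0.05) -> np.ndarray:
    """Flood-fill neighboring pixels whose depth values differ by less
    than `tol` into candidate targets. Returns one integer label per
    pixel; 0 means background/no target."""
    h, w = depth.shape
    labels = np.zeros((h, w), dtype=int)
    next_label = 1
    for i in range(h):
        for j in range(w):
            if depth[i, j] <= 0 or labels[i, j]:
                continue
            queue = deque([(i, j)])
            labels[i, j] = next_label
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and not labels[ny, nx] and depth[ny, nx] > 0
                            and abs(depth[ny, nx] - depth[y, x]) < tol):
                        labels[ny, nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return labels

demo = np.array([[1.00, 1.02, 0.00],
                 [1.01, 1.03, 2.50],
                 [0.00, 2.49, 2.50]])
print(group_by_depth(demo))  # two targets: labels 1 and 2
```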
In one embodiment, the processing is performed on the capture device 20 itself, and the raw image data of depth and color values (where the capture device 20 comprises a 3-D camera) is transmitted to the computing environment 12 via link 36. In another embodiment, the processing is performed by the processor 32 coupled to the camera, and the parsed image data is then sent to the computing environment 12. In yet another embodiment, both the raw image data and the parsed image data are sent to the computing environment 12. The computing environment 12 may receive the parsed image data, but it may still receive the raw data for executing a current process or application. For instance, if an image of the scene is to be transmitted across a computer network to another user, the computing environment 12 may transmit the raw data for processing by another computing environment.
The computing environment 12 may use the gestures library 192 to interpret movements of the skeletal model and to control an application based on those movements. The computing environment 12 can model and display a representation of the user, such as in the form of an avatar or a pointer on a display, such as the display device 194. The display device 194 may include a computer monitor, a television screen, or any suitable display device. For example, a camera-controlled computer system may capture user image data and display, on a television screen, user feedback that maps to the user's gestures. The user feedback may be displayed as an avatar on the screen, as shown in FIGS. 1A and 1B.
A visual assistance library 193 may comprise information related to visual assistance for a set of gestures. The visual assistance library 193 may provide information to the display device 194 for displaying a visual representation of instructional gesture data. For example, the display device may show a skeletal representation, a ghosted image, or a player avatar that can demonstrate a gesture. As described in more detail below, FIGS. 7A-7D illustrate both the user feedback and the visual assistance comprising instructional gesture data, shown side by side. FIG. 8A illustrates an example display of visual assistance superimposed over the visual representation of the user's gesture. FIG. 8B illustrates an example display of visual assistance comprising a demonstration of a gesture. FIG. 9 illustrates an example display with an option to enter a training mode and receive visual assistance. FIG. 10 illustrates remote users interacting over a network connection, where one user provides live visual assistance to a second user.
A variety of events may trigger the display of visual assistance demonstrating a gesture. The system may detect an error in the user's motion; the user's motion may fail to correspond to any recognized gesture; the user may request training to learn a particular gesture; and so on. An analysis of the user's motion may trigger the application to provide visual assistance to the user, or to provide an option to view visual assistance, for instance to teach the proper motion for a throwing gesture recognized by the application. As noted above, the visual assistance may take the form of, for example, a skeletal representation of a proper overhand throwing motion. The parameters and error identifiers for a gesture may vary between applications, between contexts of a single application, or within one context of one application over time.
Visual assistance information may be provided for training, correcting, revising, or teaching the user to move correctly, whether to trigger a particular motion or to move correctly in physical space for success in the application. For example, the user may be playing a boxing game. The computing environment 12 may identify a gesture applicable to the executing application, such as an uppercut, and direct the display device 194 to provide visual assistance comprising instructions on how to perform the uppercut. The visual assistance may be an on-screen avatar demonstrating the proper uppercut motion.
The gesture filters 191 in the gestures library 192 that recognize, track, or identify a gesture may also identify when visual assistance should be provided. For example, where the gesture is an overhand throw, the data captured by the cameras 26, 28 and the device 20 in the form of a skeletal model, and the movements associated with it, may be compared to the gesture filters in the gestures library 192. An output of a filter may be an identification that the user (as represented by the skeletal model) performed an overhand throwing gesture. An overhand throw comprising motion of one of the hands from behind the rear of the body to past the front of the body may be implemented as a gesture filter comprising information representing that movement, as it would be captured by the depth camera.
A difference between the user's gesture and the parameters set in the filter for the overhand throwing gesture may indicate a failure in the user's gesture and trigger entry into a training mode that teaches the user the correct motion. A detected error or variation in an identified gesture, based on the filter parameters, may trigger the display of visual assistance. Parameters may be set for the overhand throw that assist in identifying errors in the user's gesture. For example, suppose the system is executing a baseball application. The overhand throwing gesture may take as a parameter a volume of space through which the user's arm should move. If a filter 191 identifies that, during an identified overhand throw, the user's arm has moved outside that volume of space, this may indicate an error in the user's motion rather than a transition to a different gesture. The application may be expecting the overhand throwing gesture because the game is at the point where the user pitches to a batter. A failure to recognize the user's gesture as the expected overhand throwing gesture may trigger the display of visual assistance. The gesture filters may no longer be able to recognize the gesture as an overhand throw, because the motion no longer satisfies the parameters for an overhand throw and instead satisfies the filter parameters for a different gesture, such as a sidearm throw. An unexpected transition between different gestures may trigger the display of visual assistance.
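A sketch of the volume-of-space error check described above, assuming the volume parameter is an axis-aligned box given by corner points and the arm path is a list of sampled (x, y, z) positions (both representations are assumptions for the example):

```python
def count_out_of_volume(arm_path, lo, hi) -> int:
    """Count samples of the arm's path that fall outside the expected
    volume of space (box corners `lo` and `hi`). A nonzero count during
    a recognized overhand throw may indicate a motion error rather than
    a different gesture."""
    def inside(p):
        return all(l <= c <= h for l, c, h in zip(lo, p, hi))
    return sum(not inside(p) for p in arm_path)

path = [(0.2, 1.5, 0.1), (0.3, 1.9, 0.2), (0.4, 2.4, 0.3)]  # sampled arm positions
print(count_out_of_volume(path, lo=(0.0, 1.0, 0.0), hi=(1.0, 2.0, 1.0)))  # 1
```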
As mentioned, parameters may be set for a gesture. For example, where the gesture is a throw, the parameters may be a threshold velocity that the hand has to reach, a distance the hand must travel (either absolute, or relative to the size of the user as a whole), and a confidence rating by the recognizer engine that the gesture occurred. Various thresholds and ranges may be set for each parameter against which the user's motion is evaluated for errors. For example, depending on the user's skill level, the size of the volume of space through which the user's arm should pass may vary.
The parameters corresponding to a gesture may change based on the user's performance, the executing application, the context, the skill level, and the like. For example, the parameters of an overhand football throw at a "novice" skill level may include a larger volume of space through which the hand may pass, so that the system still associates the motion with the gesture and processes it accordingly. By varying the particular parameters associated with a gesture, the system can accommodate players with less experience, as shown in the sketch below.
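One way to realize this, sketched under stated assumptions: scale a parameter's acceptable range about its midpoint by skill level. The skill labels and scale factors are invented for the example.

```python
SKILL_SCALE = {"novice": 1.5, "intermediate": 1.0, "expert": 0.75}

def scale_range(lo: float, hi: float, skill: str) -> tuple:
    """Widen (or tighten) a gesture parameter's acceptable range about
    its midpoint by skill level, e.g., giving a novice's throwing arm
    a larger volume of space to pass through."""
    mid, half = (lo + hi) / 2, (hi - lo) / 2 * SKILL_SCALE[skill]
    return mid - half, mid + half

print(scale_range(-0.3, 0.3, "novice"))  # (-0.45, 0.45): wider margin for novices
```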
Several possible gestures may correspond to the user's motion. For example, the user's gesture as measured in physical space may satisfy the criteria of several filters, each comprising parameters for a possible gesture. A difference between the data representing the measured gesture and the filter parameters of the possible gestures may indicate a failure in the execution of the measured gesture.
If the data representing the user's gesture does not correspond to the filter parameters of any possible gesture, the user may not be performing the gesture correctly as it relates to the executing application. If the user's gesture does not register as any of the possible gestures, it may be desirable to trigger a training session for that user.
If the user's gesture does not correspond to any filter data, the system may predict the intent of the user's gesture. The system may predict the intended gesture based on which gestures are applicable to the executing application at that time. The prediction may be based on a comparison between the data representing the measured gesture and the filter parameters, identifying the gesture whose data most closely matches the measured gesture data.
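A minimal sketch of that closest-match prediction, assuming measured data and filter parameters are both flat name-to-value mappings and using a normalized absolute difference as the (invented) closeness measure:

```python
def predict_intended_gesture(measured: dict, filters: dict) -> str:
    """Among the gestures applicable to the executing application, pick
    the one whose filter parameters most closely match the measured
    data. `filters` maps gesture name -> {parameter: expected value}."""
    def mismatch(params):
        return sum(abs(measured.get(name, 0.0) - want) / (abs(want) or 1.0)
                   for name, want in params.items())
    return min(filters, key=lambda g: mismatch(filters[g]))

filters = {"overhand_throw": {"peak_speed": 11.2, "arc_height": 0.5},
           "sidearm_throw":  {"peak_speed": 9.0,  "arc_height": 0.1}}
print(predict_intended_gesture({"peak_speed": 10.5, "arc_height": 0.4}, filters))
# -> 'overhand_throw'
```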
A difference between the data representing the measured gesture and the filter parameters of a possible gesture may indicate a failure in the execution of the measured gesture. The difference may be compared against a threshold acceptance level, where an amount of variation below that threshold may trigger visual assistance comprising instructional gesture data. The threshold acceptance level may be related to a confidence rating. As described above, an output from a filter may comprise things such as a confidence level that a given gesture is being made. A low confidence rating may be an indication that the user is not making the gesture correctly. The threshold acceptance level may be set based on the confidence rating. For example, if the gesture recognizer engine identifies a gesture as an overhand throw, but the confidence rating is low, the system may trigger the display of visual assistance comprising instructional gesture data. Alternatively, the system may require a high confidence rating so that there is high confidence that the user is attempting a particular gesture. Visual assistance may be triggered when the difference between the data representing the measured gesture and the filter parameters is below the threshold acceptance level.
The threshold acceptance level may be a value set for a particular filter parameter. Each parameter of a filter representing a gesture may have a threshold acceptance level. The threshold acceptance level may be a single threshold value or a range of acceptable values. If the measurements of the user's measured gesture do not satisfy the threshold level or do not fall within the acceptable range, it may be desirable to display visual assistance comprising instructional gesture data. For example, the threshold acceptance level of an overhand throw (as applicable to pitching in a baseball application) may be set to a velocity equal to 25 mph. Thus, if the user's gesture is identified as an overhand throw, the velocity parameter of the overhand-throw filter may be compared against the measured velocity of the user's gesture. If the user's measured velocity does not meet 25 mph, this may trigger the display of visual assistance teaching the user how to move correctly to achieve the correct velocity.
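The 25 mph example reduces to a simple comparison; this sketch assumes the measurement arrives in meters per second (the function name and unit choice are the example's, not the patent's):

```python
MPH_PER_MPS = 2.23694  # unit conversion

def meets_speed_threshold(measured_mps: float,
                          threshold_mph: float = 25.0) -> bool:
    """Compare the measured hand velocity of a recognized overhand
    throw against the filter's velocity parameter (25 mph in the
    baseball-pitching example above)."""
    return measured_mps * MPH_PER_MPS >= threshold_mph

print(meets_speed_threshold(12.0))  # True: ~26.8 mph
print(meets_speed_threshold(9.0))   # False: ~20.1 mph -> trigger assistance
```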
The threshold acceptance level may be set, revised, or changed depending on the context, the user, the gesture-based system, historical data for the user, historical data for the application, identified improvements, and the like. The threshold level may be an acceptable amount of variation associated with a preferred parameter or with a range of parameter values for a possible gesture. The value of the threshold acceptance level may be based on the user's performance, the executing application, the context, a skill level, or the like. The threshold acceptance level may be based on a single filter parameter or on several filter parameters for a particular gesture. Similarly, the system may adapt the threshold acceptance level for a gesture. For example, the threshold level may be modified over time, such as by the user, the system, or the application.
The filter parameters and the threshold acceptance level may be set such that triggers of visual assistance are not excessive, or in accordance with user preferences. For example, some users may not want any training assistance, or may not want the executing application to be interrupted by a training session, instead choosing to enter a training mode for instructional purposes. Rather than triggering visual assistance every time there is a difference from the filter data of a gesture, the system may determine whether to display instructional gesture data based on various triggers.
The filter parameters and the threshold acceptance levels used for error identification may be modified such that visual assistance is triggered only at useful times. For example, an overhand throwing gesture might have to be identified X times before help is offered. Thus, the system may monitor the user's motion and not offer help until an incorrect or varying motion has been made a certain number of times. That way, a single errant throw does not trigger assistance. However, if the program identifies something the user could change, the system may offer an option to practice even though the user has successfully executed the gesture. For example, where the user's overhand throw has been identified, the system may identify that a change in the user's lower body could produce more velocity, or that a change in the user's follow-through could produce a curveball motion.
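A sketch of the "X times before help" idea; the class name, the consecutive-miss policy, and the default of three are assumptions made for the example:

```python
class CoachingTrigger:
    """Suppress one-off misses: offer coaching only after a gesture has
    been performed incorrectly `x` times in a row (the X above)."""
    def __init__(self, x: int = 3):
        self.x = x
        self.misses = 0

    def record_attempt(self, recognized: bool) -> bool:
        """Record one attempt; returns True when help should be offered."""
        self.misses = 0 if recognized else self.misses + 1
        return self.misses >= self.x

trigger = CoachingTrigger(x=3)
attempts = [True, False, False, False]  # one good throw, then three misses
print([trigger.record_attempt(a) for a in attempts])  # [False, False, False, True]
```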
Where an unfamiliar or novice user is playing a throwing game, the filter parameters defining the volume of space may be larger, allowing a greater margin of error before visual assistance is triggered for the gesture. A user may also request visual assistance when the user is not meeting a particular skill level. The user may request visual assistance to help reach a higher skill level. Depending on the user's skill level, the thresholds for the parameters may change. For example, for a novice player, the volume of space for a punch or an overhand throw may have a larger margin of acceptable error. Thus, depending on the type of training available in the application, user-selected settings, the parameters of the application, and the like, the trigger for entering a training mode or for providing visual assistance may vary.
The visual assistance library 193 may comprise modules that provide access to, store, and process visual assistance information for demonstrating gestures. Visual assistance may be specific to an application, a user, or a particular gesture. Some gestures apply to a specific application, where the same gesture in another application causes a different control. For example, in one application a wave of the hand may be the gesture that represents flying; in another application, raising both arms and swinging them slowly back and forth may be the gesture that represents flying.
The information related to visual feedback in the visual assistance library 193 may take any suitable form. In an example embodiment, the visual assistance takes the form of prerecorded content, i.e., content recorded during a phase prior to the use of that content. A prerecorded content module 196 may record or store prerecorded content that supports the display of a gesture demonstration, such as prerecorded audio or video related to a gesture. The prerecorded content module 196 may provide techniques for connecting to a network and receiving prerecorded gesture information from a remote server or via a networked user's computing environment. The prerecorded content module 196 may process the prerecorded content to provide visual assistance for the gesture, such as a skeletal representation, a ghosted image, or a player avatar that demonstrates the gesture.
Prerecorded content may be specific to an application, a user, or a gesture, or prerecorded content may be applicable to various applications or to a combination of users. Prerecorded content may be gesture information packaged in the visual assistance library 193 of a specific application. For example, if the application is a tennis game application, the application may include a visual assistance library with prerecorded gesture information for displaying tennis-related gestures.
Prerecorded content may include user-recorded content, where the user elects to record his or her own gestures for later viewing or use. For example, the user may have successfully performed a particular motion in a tennis game and recorded it, so that the user can later access the prerecorded content to view the recorded gesture. The user can subsequently view a prerecorded demonstration of his or her own previously successful gesture. A user may also record gesture content that the system can use to demonstrate the gesture to an unfamiliar user. For example, a parent may record his or her own successful gesture for a child to review. The child can later view and practice with the prerecorded gesture to learn the proper motion to make in physical space.
In another example embodiment, the visual assistance takes the form of live content, i.e., any information related to providing real-time visual assistance for a gesture. A real-time display refers to the display of a visual representation of a gesture, or the display of visual assistance, where the display is shown simultaneously, or almost simultaneously, with the execution of the gesture in physical space. References to real time include performance with insignificant processing delays, resulting in minimal display delay or no display delay visible to the user. Thus, real time includes any insignificant delay related to the timeliness of data that has been delayed by the time required for automatic data processing.
A live content module 197 may provide techniques for receiving, processing, and transmitting live content. For example, the live content module 197 may provide techniques for connecting to a network and receiving a live feed comprising gesture information from a remote server or from a networked user's computing environment. The live content module 197 may process the live feed comprising gesture information to display a demonstration of the gesture in real time. For example, a remote user may demonstrate a gesture, where information related to the remote user's gesture is transmitted over the network and received by the local user's computing environment. The local computing environment may process the live feed, such as via the live content module, and display visual assistance to the local user in real time. The visual assistance may be a playback or a live representation of the user's gesture, represented via that user's visual representation. The live content module 197 may display the visual assistance in any suitable manner, such as a skeletal representation, a ghosted image, or a player avatar that demonstrates the gesture.
Live content may be specific to an application or a user, or live content may be applicable to various applications or to a combination of users. For example, for a specific application, live customer support may be accessible, where the live customer support provides a live feed of gesture information from a remote computing environment. The live content module 197 may receive the live feed and, as the user makes a gesture, provide the user in real time with visual assistance representing that gesture.
In another example of live assistance, users may connect remotely or be networked such that multiple users can interact via their respective computing environments. A first user may identify that a second user, remote from the first user, is performing a gesture incorrectly. The first user can demonstrate the gesture to that user over the network connection. The second user's computing environment receives the information related to the demonstrated gesture, for example via the live content module 197, and provides visual assistance to the second user. Thus, the first user can provide live gesture information to help the second user learn the proper motion to make in physical space.
The gesture recognizer engine 190, the gestures library 192, and the visual assistance library 193 may be implemented in hardware, software, or a combination of both. For example, the gesture recognizer engine 190, the gestures library 192, and the visual assistance library 193 may be implemented as software that executes on a processor of the computing environment, such as the processor 195, on the processor 32 of the capture device 20, on the processing unit 101 of FIG. 3, or on the processing unit 259 of FIG. 4.
It is emphasized that the block diagrams depicted in FIGS. 2-4 are exemplary and are not intended to imply a specific implementation. Thus, the processor 195 or 32 of FIG. 2, the processing unit 101 of FIG. 3, and the processing unit 259 of FIG. 4 can each be implemented as a single processor or as multiple processors. Multiple processors can be distributed or centrally located. For example, the gesture recognizer engine 190 may be implemented as software that executes on the processor 32 of the capture device, and the gestures library and visual assistance library 193 may be implemented as software that executes on the processor 195 in the computing environment. Any combination of processors suitable for performing the techniques disclosed herein is contemplated. Multiple processors can communicate wirelessly, via hardwire, or via a combination thereof.
FIG. 2 depicts the capture device 20 and the computing environment 12 separately, but a system comprising any number of devices may perform the functions shown in FIG. 2. For example, the computing environment may be incorporated into the capture device 20, such that the capture device can act as a single unit with one or more processors. Thus, while the computing environment 12 and the capture device 20 are described separately herein, this is for illustrative purposes. Any suitable device, system, or combination of devices and systems capable of performing the disclosed techniques may be used.
Fig. 3 illustrates the example embodiment that the computing environment 12 that can be used for realizing Fig. 2 is come the computing environment of the one or more postures in objective of interpretation identification, analysis and the tracking system.As shown in Figure 3, this computing environment can be such as multimedia consoles such as game console 100.Also as shown in Figure 3, multimedia console 100 has the CPU (CPU) 101 that contains on-chip cache 102, second level cache 104 and flash rom (read-only storage) 106.On-chip cache 102 is with second level cache 104 temporary storaging datas and therefore reduce number of memory access cycles, improves processing speed and handling capacity thus.CPU 101 can be arranged to have more than one kernel, and additional firsts and seconds high-speed cache 102 and 104 thus.The executable code that loads during the starting stage of bootup process when flash rom 106 can be stored in multimedia console 100 energisings.
The Video processing streamline that GPU (GPU) 108 and video encoder/video codec (encoder/decoder) 114 are formed at a high speed and high graphics is handled.Transport data from GPU 108 to video encoder/video codec 114 via bus.The Video processing streamline is used to transfer to TV or other displays to A/V (audio/video) port one 40 output data.Memory Controller 110 is connected to GPU 108 making things convenient for the various types of memories 112 of processor access, such as but be not limited to RAM (random access memory).
Multimedia console 100 comprises preferably the I/O controller 120 on module 118, realized, System Management Controller 122, audio treatment unit 123, network interface controller 124, a USB master controller 126, the 2nd USB controller 128 and front panel I/O subassembly 130. USB controller 126 and 128 main frames as peripheral controllers 142 (1)-142 (2), wireless adapter 148 and external memory equipment 146 (for example flash memory, external CD/DVD ROM driver, removable medium etc.).Network interface 124 and/or wireless adapter 148 provide the visit of network (for example, internet, home network etc.) and can be comprise in the various wired or wireless adapter assembly of Ethernet card, modem, bluetooth module, cable modem etc. any.
Provide system storage 143 to be stored in the application data that loads during the bootup process.Media drive 144 is provided, and it can comprise DVD/CD driver, hard disk drive or other removable media drivers etc.Media drive 144 can be built-in or external to multimedia controller 100.Application data can be via media drive 144 visit, with by multimedia console 100 execution, playback etc.Media drive 144 is connected to I/O controller 120 via connect buses such as (for example IEEE 1394) at a high speed such as serial ATA bus or other.
The system management controller 122 provides a variety of service functions related to assuring availability of the multimedia console 100. The audio processing unit 123 and an audio codec 132 form a corresponding audio processing pipeline with high-fidelity and stereo processing. Audio data is carried between the audio processing unit 123 and the audio codec 132 via a communication link. The audio processing pipeline outputs data to the A/V port 140 for reproduction by an external audio player or a device having audio capabilities.
The front panel I/O subassembly 130 supports the functionality of the power button 150 and the eject button 152, as well as any LEDs (light emitting diodes) or other indicators exposed on the outer surface of the multimedia console 100. A system power supply module 136 provides power to the components of the multimedia console 100. A fan 138 cools the circuitry within the multimedia console 100.
The CPU 101, GPU 108, memory controller 110, and various other components within the multimedia console 100 are interconnected via one or more buses, including serial and parallel buses, a memory bus, a peripheral bus, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures may include a Peripheral Component Interconnect (PCI) bus, a PCI-Express bus, etc.
When the multimedia console 100 is powered on, application data may be loaded from the system memory 143 into memory 112 and/or the caches 102, 104 and executed on the CPU 101. The application may present a graphical user interface that provides a consistent user experience when navigating to the different media types available on the multimedia console 100. In operation, applications and/or other media contained within the media drive 144 may be launched or played from the media drive 144 to provide additional functionality to the multimedia console 100.
The multimedia console 100 may be operated as a standalone system by simply connecting the system to a television or other display. In this standalone mode, the multimedia console 100 allows one or more users to interact with the system, watch movies, or listen to music. However, with the integration of broadband connectivity made available through the network interface 124 or the wireless adapter 148, the multimedia console 100 may further be operated as a participant in a larger network community.
When the multimedia console 100 is powered on, a set amount of hardware resources may be reserved for system use by the multimedia console operating system. These resources may include a reservation of memory (e.g., 16 MB), CPU and GPU cycles (e.g., 5%), networking bandwidth (e.g., 8 kbps), and so on. Because these resources are reserved at system boot time, the reserved resources do not exist from the application's point of view.
In particular, the memory reservation is preferably large enough to contain the launch kernel, concurrent system applications, and drivers. The CPU reservation is preferably constant, such that if the reserved CPU usage is not consumed by the system applications, an idle thread will consume any unused cycles.
With regard to the GPU reservation, lightweight messages generated by the system applications (e.g., pop-up windows) are displayed by using a GPU interrupt to schedule code to render a pop-up into an overlay. The amount of memory required for an overlay depends on the overlay area size, and the overlay preferably scales with screen resolution. Where a full user interface is used by a concurrent system application, it is preferable to use a resolution independent of the application resolution. A scaler may be used to set this resolution such that the need to change frequency and cause a TV resync is eliminated.
After the multimedia console 100 boots and system resources are reserved, concurrent system applications execute to provide system functionality. The system functionality is encapsulated in a set of system applications that execute within the reserved system resources described above. The operating system kernel identifies which threads are system application threads as opposed to game application threads. The system applications are preferably scheduled to run on the CPU 101 at predetermined times and intervals in order to provide a consistent system resource view to the application. The scheduling is intended to minimize cache disruption for the game application running on the console.
When a concurrent system application requires audio, audio processing is scheduled asynchronously to the game application due to time sensitivity. A multimedia console application manager (described below) controls the game application's audio level (e.g., mute, attenuate) when system applications are active.
Input devices (e.g., controllers 142(1) and 142(2)) are shared by game applications and system applications. The input devices are not reserved resources, but are switched between system applications and the game application such that each has a focus of the device. The application manager preferably controls the switching of the input stream, without knowledge of the game application, and a driver maintains state information regarding focus switches. The cameras 26, 28 and capture device 20 may define additional input devices for the console 100.
Fig. 4 illustrates another example embodiment of a computing environment 220 that may be used to implement the computing environment 12 shown in Figs. 1A-2 for interpreting one or more gestures in a target recognition, analysis, and tracking system. The computing system environment 220 is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality of the presently disclosed subject matter. Neither should the computing environment 220 be interpreted as having any dependency or requirement relating to any one component or combination of components illustrated in the exemplary operating environment 220. In some embodiments, the various depicted computing elements may include circuitry configured to instantiate specific aspects of the present disclosure. For example, the term circuitry used in the disclosure may include specialized hardware components configured to perform functions by firmware or switches. In other examples, the term circuitry may include a general-purpose processing unit, memory, etc., configured by software instructions that embody logic operable to perform functions. In example embodiments where circuitry includes a combination of hardware and software, an implementer may write source code embodying logic, and the source code may be compiled into machine-readable code that can be processed by the general-purpose processing unit. Since one skilled in the art can appreciate that the state of the art has evolved to a point where there is little difference between hardware, software, or a combination of hardware/software, the selection of hardware versus software to effectuate specific functions is a design choice left to the implementer. More specifically, one skilled in the art can appreciate that a software process can be transformed into an equivalent hardware structure, and a hardware structure can itself be transformed into an equivalent software process. Thus, the selection of a hardware implementation versus a software implementation is a design choice left to the implementer.
In Fig. 4, the computing environment 220 comprises a computer 241, which typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by the computer 241, and include both volatile and nonvolatile media, removable and non-removable media. The system memory 222 includes computer storage media in the form of volatile and/or nonvolatile memory, such as read-only memory (ROM) 223 and random access memory (RAM) 260. A basic input/output system 224 (BIOS), containing the basic routines that help to transfer information between elements within the computer 241, such as during start-up, is typically stored in ROM 223. RAM 260 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by the processing unit 259. By way of example, and not limitation, Fig. 4 illustrates an operating system 225, application programs 226, other program modules 227, and program data 228.
The computer 241 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only, Fig. 4 illustrates a hard disk drive 238 that reads from or writes to non-removable, nonvolatile magnetic media; a magnetic disk drive 239 that reads from or writes to a removable, nonvolatile magnetic disk 254; and an optical disk drive 240 that reads from or writes to a removable, nonvolatile optical disk 253 such as a CD ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The hard disk drive 238 is typically connected to the system bus 221 through a non-removable memory interface such as interface 234, and the magnetic disk drive 239 and optical disk drive 240 are typically connected to the system bus 221 by a removable memory interface, such as interface 235.
The drives and their associated computer storage media discussed above and illustrated in Fig. 4 provide storage of computer-readable instructions, data structures, program modules, and other data for the computer 241. In Fig. 4, for example, the hard disk drive 238 is illustrated as storing an operating system 258, application programs 257, other program modules 256, and program data 255. Note that these components can be either the same as or different from the operating system 225, application programs 226, other program modules 227, and program data 228. The operating system 258, application programs 257, other program modules 256, and program data 255 are given different numbers here to illustrate that, at a minimum, they are different copies. A user may enter commands and information into the computer 241 through input devices such as a keyboard 251 and a pointing device 252, commonly referred to as a mouse, trackball, or touch pad. Other input devices (not shown) may include a microphone, joystick, game pad, satellite dish, scanner, or the like. These and other input devices are often connected to the processing unit 259 through a user input interface 236 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port, or a universal serial bus (USB). The cameras 26, 28 and capture device 20 may define additional input devices for the console 100. A monitor 242 or other type of display device is also connected to the system bus 221 via an interface, such as a video interface 232. In addition to the monitor, computers may also include other peripheral output devices, such as speakers 244 and a printer 243, which may be connected through an output peripheral interface 233.
The computer 241 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 246. The remote computer 246 may be a personal computer, a server, a router, a network PC, a peer device, or another common network node, and typically includes many or all of the elements described above relative to the computer 241, although only a memory storage device 247 has been illustrated in Fig. 4. The logical connections depicted in Fig. 4 include a local area network (LAN) 245 and a wide area network (WAN) 249, but may also include other networks. Such networking environments are commonplace in offices, enterprise-wide computer networks, intranets, and the Internet.
When used in a LAN networking environment, the computer 241 is connected to the LAN 245 through a network interface or adapter 237. When used in a WAN networking environment, the computer 241 typically includes a modem 250 or other means for establishing communications over the WAN 249, such as the Internet. The modem 250, which may be internal or external, may be connected to the system bus 221 via the user input interface 236 or another appropriate mechanism. In a networked environment, program modules depicted relative to the computer 241, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation, Fig. 4 illustrates remote application programs 248 as residing on the memory device 247. It will be appreciated that the network connections shown are exemplary, and other means of establishing a communications link between the computers may be used.
The computer-readable storage medium described above may store instructions for scanning a human in a captured scene. The computer-executable instructions may comprise instructions for receiving a depth image of a physical space, wherein the depth image includes data representative of a gesture, and for rendering a visual aid representing instructional gesture data corresponding to the gesture. The computer-readable storage medium may also store instructions for determining whether to provide instructional data. Those instructions may comprise instructions for receiving image data of a scene, wherein the image data includes a representation of the gesture; comparing the data representative of the gesture to at least one output of a gesture filter; detecting a variation between the data representative of the gesture and the at least one output of the gesture filter, wherein the variation indicates a failure in the performance of the gesture; and determining whether to provide instructional gesture data based on the variation.
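The following is a minimal sketch of the decision logic just described, under assumed names and values; nothing here is an API from the patent or any real SDK. The variation is modeled, purely for illustration, as a shortfall from full filter confidence.

```python
# Hypothetical sketch: decide whether to provide instructional gesture data.
# VARIATION_THRESHOLD is an assumed tolerance, not a value from the patent.

VARIATION_THRESHOLD = 0.25

def detect_variation(observed_confidence: float) -> float:
    """Model the variation as the shortfall from full confidence."""
    return 1.0 - observed_confidence

def should_provide_instruction(observed_confidence: float) -> bool:
    """Offer coaching when the variation indicates a failed performance."""
    return detect_variation(observed_confidence) > VARIATION_THRESHOLD

# Example: a punch recognized with confidence 0.6 triggers coaching; 0.9 does not.
assert should_provide_instruction(0.6)
assert not should_provide_instruction(0.9)
```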
Fig. 5A depicts an example skeletal mapping of a user that may be generated from the capture device 20. In this embodiment, a variety of joints and bones are identified: each hand 502, each forearm 504, each elbow 506, each bicep 508, each shoulder 510, each hip 512, each thigh 514, each knee 516, each foreleg 518, each foot 520, the head 522, the torso 524, the top 526 and bottom 528 of the spine, and the waist 530. Where more points are tracked, additional features may be identified, such as the bones and joints of the fingers or toes, or individual features of the face, such as the nose and eyes.
A user may create gestures by moving his body. A gesture comprises a motion or pose by a user that may be captured as image data and parsed for meaning. A gesture may be dynamic, comprising a motion, such as mimicking a pitch. A gesture may be a static pose, such as holding one's crossed forearms 504 in front of one's torso 524. A gesture may be a single movement (e.g., a jump) or a continuous gesture (e.g., driving), and may be short in duration or long in duration (e.g., driving for 20 minutes). A gesture may also incorporate props, such as swinging a mock sword. A gesture may comprise more than one body part, such as clapping the hands 502 together, or a subtler motion, such as pursing one's lips.
Gestures may be used for input in a general computing context. For instance, various motions of the hands 502 or other body parts may correspond to common system-wide tasks, such as navigating up or down in a hierarchical list, opening a file, closing a file, and saving a file. For example, a user may hold his hand with the fingers pointing up and the palm facing the capture device 20. He may then draw the fingers in toward the palm to make a fist, and this may be a gesture indicating that the focused window in a window-based user-interface computing environment should be closed. Gestures may also be used in a video-game-specific context, depending on the game. For instance, in a driving game, various motions of the hands 502 and feet 520 may correspond to steering a vehicle in a direction, shifting gears, accelerating, and braking. Thus, a gesture may indicate a wide variety of motions that map to a displayed user representation in a wide variety of applications, such as video games, text editors, word processing, data management, and the like.
A user may generate a gesture that corresponds to walking or running by walking or running in place. For example, the user may alternately lift and drop each leg 512-520 to mimic walking without moving. The system may parse this gesture by analyzing each hip 512 and each thigh 514. A step may be recognized when one hip-thigh angle (as measured relative to a vertical line, where a standing leg has a hip-thigh angle of 0° and a forward, horizontally extended leg has a hip-thigh angle of 90°) exceeds a certain threshold relative to the other thigh. A walk or run may be recognized after some number of consecutive steps by alternating legs. The time between the two most recent steps may be thought of as a period. After some number of periods where the threshold angle is not met, the system may determine that the walk or run gesture has ceased.
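A sketch of the step-counting parse described above follows, under stated assumptions: hip-thigh angles arrive in degrees per frame per leg, measured from vertical, and the threshold and step-count values are illustrative rather than taken from the patent.

```python
# Hypothetical walk/run step detection from hip-thigh angles.
STEP_ANGLE_THRESHOLD = 30.0   # assumed: angle that counts as a lifted leg
STEPS_TO_START = 2            # assumed: alternating steps required to begin

def count_alternating_steps(left_angles, right_angles):
    """Count steps, requiring the lifted leg to alternate each time."""
    steps, last_leg = 0, None
    for left, right in zip(left_angles, right_angles):
        for leg, angle in (("L", left), ("R", right)):
            if angle > STEP_ANGLE_THRESHOLD and leg != last_leg:
                steps += 1
                last_leg = leg
    return steps

# Example: the left and right thighs alternately exceed the threshold.
left = [35.0, 5.0, 40.0, 5.0]
right = [5.0, 38.0, 5.0, 36.0]
assert count_alternating_steps(left, right) >= STEPS_TO_START
```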
Given a "walk or run" gesture, an application may set values for parameters associated with this gesture. These parameters may include the above threshold angle, the number of steps required to initiate a walk or run gesture, the number of periods where no step occurs for the gesture to end, and a threshold period that determines whether the gesture is a walk or a run. A fast period may correspond to a run, as the user will be moving his legs quickly, and a slower period may correspond to a walk.
A gesture may initially be associated with a set of default parameters that an application may override with its own parameters. In this scenario, an application is not forced to provide parameters, but may instead use a set of default parameters that allow the gesture to be recognized in the absence of application-defined parameters. Information related to the gesture may be stored for purposes of pre-canned gesture animation.
There are a variety of outputs that may be associated with the gesture. There may be a baseline "yes or no" as to whether a gesture is occurring. There may also be a confidence level, which corresponds to the likelihood that the user's tracked movement corresponds to the gesture. This may be a linear scale that ranges over floating point numbers between 0 and 1, inclusive. Where an application receiving this gesture information cannot accept false positives as input, it may use only those recognized gestures that have a high confidence level, such as at least 0.95. Where an application must recognize every instance of the gesture, even at the cost of false positives, it may use gestures that have at least a much lower confidence level, such as those merely greater than 0.2. The gesture may have an output for the time between the two most recent steps, and where only a first step has been registered, this may be set to a reserved value, such as -1 (since the time between any two steps must be positive). The gesture may also have an output for the highest thigh angle reached during the most recent step.
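The sketch below illustrates how two applications could consume these same filter outputs with different confidence cut-offs. The output fields and the reserved value of -1 follow the text above; the class name and field names are assumptions for illustration.

```python
# Hypothetical container for the walk/run filter outputs described above.
from dataclasses import dataclass

@dataclass
class WalkRunOutput:
    occurring: bool          # baseline "yes or no"
    confidence: float        # linear scale, 0.0-1.0 inclusive
    step_period: float       # seconds between two most recent steps; -1 if only one step
    max_thigh_angle: float   # highest thigh angle during the most recent step

def accept(output: WalkRunOutput, min_confidence: float) -> bool:
    """An application intolerant of false positives might pass 0.95;
    one that must catch every instance might pass 0.2."""
    return output.occurring and output.confidence >= min_confidence

first_step = WalkRunOutput(True, 0.4, -1.0, 52.0)
assert accept(first_step, 0.2) and not accept(first_step, 0.95)
```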
Another exemplary gesture is a "heel lift jump." In this gesture, a user may create the gesture by raising his heels off the ground while keeping his toes planted. Alternatively, the user may jump into the air, where his feet 520 leave the ground entirely. The system may parse the skeleton for this gesture by analyzing the angle relation of the shoulders 510, hips 512, and knees 516 to see whether they are in a position of alignment equal to standing up straight. These points, along with the upper 526 and lower 528 spine points, may then be monitored for any upward acceleration. A sufficient combination of acceleration may trigger a jump gesture. A sufficient combination of acceleration with a particular gesture may satisfy the parameters of a transition point.
Given this "heel lift jump" gesture, an application may set values for parameters associated with the gesture. The parameters may include the above acceleration threshold, which determines how fast some combination of the user's shoulders 510, hips 512, and knees 516 must move upward to trigger the gesture, as well as a maximum angle of alignment between the shoulders 510, hips 512, and knees 516 at which a jump may still be triggered. The outputs may comprise a confidence level, as well as the user's body angle at the time of the jump.
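A sketch of the heel-lift jump check follows, under assumed units (upward accelerations per tracked point, alignment in degrees from vertical). The threshold values stand in for the application-set parameters described above and are made up for illustration.

```python
# Hypothetical heel-lift jump trigger combining alignment and acceleration.
ACCEL_THRESHOLD = 3.0        # assumed per-point upward-acceleration trigger
MAX_ALIGNMENT_ANGLE = 10.0   # assumed max shoulder/hip/knee misalignment

def heel_lift_jump(upward_accels, alignment_angle_deg):
    """Trigger when shoulders/hips/knees are near standing alignment and the
    combined upward acceleration of the monitored points is sufficient."""
    if alignment_angle_deg > MAX_ALIGNMENT_ANGLE:
        return None  # not aligned with standing: no jump evaluated
    combined = sum(upward_accels)
    confidence = min(1.0, combined / (2 * len(upward_accels) * ACCEL_THRESHOLD))
    return {"jump": combined > len(upward_accels) * ACCEL_THRESHOLD,
            "confidence": confidence,
            "body_angle": alignment_angle_deg}

result = heel_lift_jump([3.5, 3.2, 3.8, 3.1], alignment_angle_deg=4.0)
assert result["jump"] and 0.0 < result["confidence"] <= 1.0
```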
Setting parameters for a gesture based on the particulars of the application that will receive the gesture is important for accurately identifying gestures. Properly identifying gestures and the intent of a user greatly helps in creating a positive user experience.
An application may set values for parameters associated with various transition points to identify the points at which to use pre-canned animations. Transition points may be defined by various parameters, such as the identification of a particular gesture, a velocity, an angle of a target or object, or any combination thereof. If a transition point is defined at least in part by the identification of a particular gesture, then properly identifying gestures helps to increase the confidence level that the parameters of the transition point have been met.
Another parameter to a gesture may be a distance moved. Where a user's gestures control the actions of an avatar in a virtual environment, the avatar may be arm's length from a ball. If the user wishes to interact with the ball and grab it, this may require the user to extend his arm 502-510 to full length while making a grab gesture. In this situation, a similar grab gesture where the user only partially extends his arm 502-510 may not achieve the result of interacting with the ball. Likewise, a parameter of a transition point could be the identification of the grab gesture, where, if the user only partially extends his arm 502-510 and thereby does not achieve the result of interacting with the ball, the user's gesture will also not meet the parameters of the transition point.
A gesture, or a portion thereof, may have as a parameter a volume of space in which it must occur. This volume of space may typically be expressed in relation to the body where the gesture comprises body movement. For instance, a football throwing gesture for a right-handed user may be recognized only in the volume of space no lower than the right shoulder 510a and on the same side of the head 522 as the throwing arm 502a-510a. It may not be necessary to define all bounds of a volume, as with this throwing gesture, where the outer bound away from the body is left undefined and the volume extends out indefinitely, or to the edge of the scene being monitored.
Fig. 5B provides further details of one exemplary embodiment of the gesture recognizer engine 190 of Fig. 2. As shown, the gesture recognizer engine 190 may comprise at least one filter 518 to determine a gesture or gestures. A filter 518 comprises information defining a gesture 526 (hereinafter referred to as a "gesture") and may comprise at least one parameter 528, or metadata, for that gesture. For instance, a throw, which comprises the motion of one of the hands from behind the rear of the body to past the front of the body, may be implemented as a gesture 526 comprising information representing the movement of one of the user's hands from behind the rear of the body to past the front of the body, as that movement would be captured by the depth camera. Parameters 528 may then be set for that gesture 526. Where the gesture 526 is a throw, a parameter 528 may be a threshold velocity that the hand has to reach, a distance the hand must travel (either absolute, or relative to the size of the user as a whole), and a confidence rating by the recognizer engine that the gesture occurred. These parameters 528 for the gesture 526 may vary between applications, between contexts of a single application, or within one context of one application over time.
Filters may be modular or interchangeable. In an embodiment, a filter has a number of inputs, each having a type, and a number of outputs, each having a type. In this situation, a first filter may be replaced with a second filter that has the same number and types of inputs and outputs as the first filter, without altering any other aspect of the recognizer engine architecture. For instance, there may be a first filter for driving that takes skeletal data as input and outputs a confidence that the gesture associated with the filter is occurring, along with an angle of steering. Where one wishes to substitute a second driving filter for this first driving filter (perhaps because the second driving filter is more efficient and requires fewer processing resources), one may do so by simply replacing the first filter with the second, so long as the second filter has the same inputs and outputs—one input of skeletal data type, and two outputs of confidence type and angle type.
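The interchangeability property just described can be sketched as a shared, typed interface: two driving filters take the same input (skeletal data) and produce the same outputs (confidence, steering angle), so one can stand in for the other. The interface and both implementations below are illustrative assumptions, not the patent's actual filter design.

```python
# Hypothetical typed filter interface demonstrating drop-in replacement.
from typing import Protocol, Tuple

SkeletalData = dict  # assumed: joint name -> (x, y, z) position

class DrivingFilter(Protocol):
    def evaluate(self, skeleton: SkeletalData) -> Tuple[float, float]:
        """Returns (confidence 0-1, steering angle in degrees)."""
        ...

class SimpleDrivingFilter:
    def evaluate(self, skeleton):
        left, right = skeleton["hand_left"], skeleton["hand_right"]
        angle = (left[1] - right[1]) * 90.0  # crude: hand height difference
        return 0.8, max(-90.0, min(90.0, angle))

class EfficientDrivingFilter:
    """A cheaper drop-in replacement with the same inputs and outputs."""
    def evaluate(self, skeleton):
        return 0.8, 0.0  # stub: same signature, different internals

def steer(engine_filter: DrivingFilter, skeleton: SkeletalData):
    return engine_filter.evaluate(skeleton)

skeleton = {"hand_left": (0.3, 0.6, 1.0), "hand_right": (0.7, 0.4, 1.0)}
for f in (SimpleDrivingFilter(), EfficientDrivingFilter()):
    confidence, angle = steer(f, skeleton)  # engine code is unchanged
```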
A filter need not have parameters. For instance, a "user height" filter that returns the user's height may not allow for any parameters that can be tuned. An alternate "user height" filter may have tunable parameters, such as whether to account for the user's footwear, hairstyle, headwear, and posture when determining the user's height.
Inputs to a filter may comprise things such as joint data about a user's joint position, the angles formed by the bones that meet at a joint, RGB color data from the scene, and the rate of change of some aspect of the user. Outputs from a filter may comprise things such as the confidence that a given gesture is being made, the speed at which a gesture motion is made, and the time at which a gesture motion is made.
A context may be a cultural context, and it may be an environmental context. A cultural context refers to the culture of a user using the system. Different cultures may use similar gestures to impart markedly different meanings. For instance, an American user who wishes to tell another user to "look" or "use his eyes" may place the tip of his index finger on his head near his eye. To an Italian user, however, this gesture may be interpreted as a reference to the Mafia.
Similarly, there may be different contexts among the different environments of a single application. Take a first-person shooter game that involves operating a motor vehicle. While the user is on foot, making a fist with the fingers toward the ground and extending the fist in front of and away from the body may represent a punching gesture. While the user is in the driving context, that same motion may represent a "gear shifting" gesture. There may also be one or more menu environments, where the user can save his game, select among his character's equipment, or perform similar actions that do not comprise direct game-play. In that environment, the same gesture may have a third meaning, such as selecting something or advancing to another screen.
Filters may run side by side, and multiple filters may look for the same things but with different execution. Thus, the number of filters may increase depending on the permutations of motion that could define a gesture. For instance, a curveball may be an overhand gesture, but a player may then prefer to pitch underhand if he finds success in doing so.
The gesture recognizer engine 190 may have a base recognizer engine 516 that provides functionality to a gesture filter 518. In an embodiment, the functionality that the recognizer engine 516 implements includes an input-over-time archive that tracks recognized gestures and other input, a Hidden Markov Model implementation (where the modeled system is assumed to be a Markov process—one where the present state encapsulates any past state information needed to determine a future state, so no other past state information need be maintained for this purpose—with unknown parameters, and hidden parameters are determined from the observable data), as well as other functionality required to solve particular instances of gesture recognition.
Filters 518 are loaded and implemented on top of the base recognizer engine 516 and can utilize services that the engine 516 provides to all filters 518. In an embodiment, the base recognizer engine 516 processes received data to determine whether it meets the requirements of any filter 518. Since such provided services, like parsing the input, are offered once by the base recognizer engine 516 rather than by each filter 518, a given service need only be processed once in a period of time as opposed to once per filter 518 for that period, so the processing required to determine gestures is reduced.
An application may use the filters 518 provided by the gesture recognizer engine 190, or it may provide its own filter 518, which plugs into the base recognizer engine 516. In an embodiment, all filters 518 have a common interface that enables this plug-in characteristic. Further, all filters 518 may utilize parameters 528, so a single gesture tool, described below, may be used to debug and tune the entire filter system 518.
These parameters 528 may be tuned for an application, or a context of an application, by a gesture tool 520. In an embodiment, the gesture tool 520 comprises a plurality of sliders 522, each slider 522 corresponding to a parameter 528, as well as a pictorial representation of a body 524. As a parameter 528 is adjusted with a corresponding slider 522, the body 524 may demonstrate both actions that would be recognized as the gesture with those parameters 528 and actions that would not be recognized as the gesture with those parameters 528, as identified. This visualization of the gesture parameters 528 provides an effective means of both debugging and fine-tuning a gesture.
An example of an application executing on the computing environment is a boxing game that the user may be playing, as shown in Figs. 6A-6C. Figs. 6A-6C represent the user's punch gesture 62a, 62b, 62c in the physical space, with the user's visual representation 64a, 64b, 64c mapped to the user's motions in the physical space. Each of Figs. 6A-6C depicts the user's position in the physical space, 62a, 62b, 62c, at three discrete points in time during the punch gesture, along with an example of the user's visual representation, 64a, 64b, 64c, displayed in the application space. The rate at which frames of image data are captured and displayed determines the level of continuity of the displayed motion of the visual representation. Though additional frames of image data may be captured and displayed, the frames depicted in Figs. 6A-6C are selected for illustrative purposes.
The capture device 20 may capture, analyze, and track the motions the user makes in the physical space, such as the user's punch gestures 62a, 62b, 62c. According to an example embodiment, the capture device 20 may be configured to capture video with depth information, including a depth image of the scene. The capture device 20 may provide the captured depth information and images, as well as a skeletal model that the capture device 20 may generate, for display by the audiovisual device 16. The user may view the image, or the user's visual representation 64a, on an audiovisual device such as a television screen 16, as shown in Fig. 6D.
The audiovisual device 16 may provide a visual representation 64a, 64b, 64c of a player avatar that the user may control with his or her movements. The user's motions may be mapped to the visual representation in the application space to perform one or more controls or actions within the application. For example, the user 18 may be tracked using the capture device 20 such that the gestures of user 18 may be interpreted as controls that affect the application being executed by the computing environment. In the boxing game application, the user's punching motions in the physical space, in the form of gestures 62a, 62b, 62c, control the visual representation 64a, 64b, 64c to throw a punch in game space. Thus, according to an example embodiment, the computing environment and the capture device 20 of the target recognition, analysis, and tracking system 10 may be used to recognize and analyze the punch of the user 18 in the physical space such that the punch may be interpreted as a game control of the player avatar 40 in game space. The visual representation may be mapped to the user's gestures and displayed in real time with respect to the execution of the gesture in the physical space.
In some cases, it may be desirable to provide a visual aid representing instructional gesture data that teaches the user how to gesture properly. The instructional gesture data can teach the user how to gesture properly to control the executing application. In an example embodiment, the computing environment or the executing application may identify errors in the user's gesture and provide instructional gesture data that highlights those errors. In another example embodiment, the user selects a training session or enters a training mode to learn how to gesture properly.
Figs. 7A-7C depict example split-screen screenshots for playing back a user's gesture as an avatar and displaying instructional gesture data side by side with that avatar. The left side of each split-screen displays the visual representation 64a, 64b, 64c of the user's gesture at the three discrete points in time during the user's punch gesture shown in Figs. 6A-6C. The right side of each split-screen presents an example embodiment of a visual aid representing the instructional gesture data 74a, 74b, 74c that corresponds to each frame of image data 64a, 64b, 64c. Fig. 7D depicts the display portion of the system 10 from Figs. 1A and 1B, illustrating an example embodiment of the split-screen display. In this example, the left side of each of Figs. 7A-7C corresponds to snapshots or frames of image data as captured by a depth camera, a parallel RGB camera, or images combined from both cameras. On the right side of the split-screen, the system displays a visual aid that highlights errors in the user's gesture.
The visual aid may be a demonstration of the proper motion. For example, the user may select a training session for punching, and the system may initiate a specific interactive training session that teaches the user the proper motion for a punch. The system may display the user's motion (live or via playback) alongside the visual aid, highlighting any errors in the user's motion. Various display techniques may highlight errors in the user's gesture. The aid may depict the delta between the user's actual position and the ideal position. For example, the arrows in Figs. 7B and 7C point to parts of the avatar's body 64a, 64b, 64c representing the user's positioning in the physical space, illustrating the difference between the user's positioning at that moment and the ideal gesture position 74a, 74b, 74c.
The system may provide visual aids for training purposes, demonstrating the proper motions for controlling the system or the executing application. The training may be part of a training mode that the user actively elects to enter. The user may also request training, such as when executing an application for the first time. A variety of events can trigger the display of a visual aid for gesture demonstration. The system may detect an error in the user's motion or a deviation from an expected motion. The system may identify areas of the user's positioning that could be modified for greater success with the gesture in the executing application. The user's gesture may not correspond to any recognized gesture, indicating that the user is unfamiliar with the proper gesture. The system may predict the user's intended gesture and offer an option to train the user to make that gesture properly. The system may recognize an inexperienced user and offer training. Any of these events may trigger the display of a visual aid comprising instructional gesture data.
The system or application may comprise a training mode and an execution mode. The training mode may be entered automatically as a result of a trigger, such as those described above. For example, if the user 18 throws a punch in the physical space that does not correspond to any recognized gesture, the system 10 may interrupt the application, enter a training mode, and provide the user with a visual aid demonstrating the proper gesture.
The system may identify errors in the user's motion by evaluating the user's position in a single captured frame of data or in a series of frames. For example, the system may recognize the user's gesture as an uppercut punch, identified by movement within a particular volume of space with respect to the user. The system may also identify, in the frames of image data, that the user is not moving his arms properly, or that a change to the user's motion in the physical space could provide a better hit in the boxing game.
The system may identify errors based on certain parameters of the user's measured gesture, such as a deviation from an ideal velocity range, or the user's measured gesture falling outside an expected volume of space. For example, a baseball application may direct the user to make an overhand throw to pitch to a batter. The depth camera or an RGB camera may identify aspects of the user's motion in the physical space, and a gesture filter 191 may identify the motion as being in the class of throwing gestures. The parameters for the various classes of throwing gestures may differ based on the volume of space monitored around the user's head. An overhand throw may occur in the volume of space in front of and behind the user's head, but above the user's throwing shoulder. An underhand throw may be defined by a volume of space in front of and behind the user's waist, between the shoulders and the user's waist. The gesture filter may identify an overhand throw based on the parameters of the user's motion in the physical space. Because of the point in the application at which it occurs (e.g., the user is at a point in the baseball application where the user is directed to pitch to a batter), the gesture filter may expect an overhand throw (or an attempted overhand throw). A deviation from the filter parameters, or a failure to meet a threshold acceptance level, may trigger the display of a visual aid.
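A sketch of classifying a throw by the volume of space it occupies, per the description above, follows. Coordinates are assumed to be body-relative (y up), with shoulder and waist heights taken from the skeletal model; the classification rules paraphrase the text and are not exact filter parameters.

```python
# Hypothetical volume-of-space throw classification.
def classify_throw(hand_y_path, shoulder_y, waist_y):
    """Return 'overhand', 'underhand', or None from the hand's height track."""
    if all(y >= shoulder_y for y in hand_y_path):
        return "overhand"    # stays above the throwing shoulder
    if all(waist_y <= y <= shoulder_y for y in hand_y_path):
        return "underhand"   # stays between waist and shoulder
    return None              # outside both expected volumes

# Example: the hand dips below the shoulder mid-throw, so neither volume
# matches; such a deviation could trigger the visual aid described above.
path = [1.6, 1.5, 1.2, 1.4]
assert classify_throw(path, shoulder_y=1.45, waist_y=1.0) is None
```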
The display of a visual aid comprising instructional gesture data may take any suitable form. In this embodiment, the instructional gesture data 74a, 74b, 74c is shown as a stick figure representation, but it is contemplated that the visual representation may take any suitable form. For example, arrows or highlighting of portions of the user's visual representation may indicate the proper gesture, as shown in Figs. 7A-7D. Also as shown in Figs. 7A-7D, the display may be a side-by-side representation of the user's gesture and the ideal gesture. The visual aid may take the form of a playback of the motion (at capture speed, a slowed speed, a sped-up speed, etc.) or be displayed in real time. As shown in Fig. 8A, the image representing the instructional gesture data may be ghosted or superimposed over the visual representation.
The left side of each of Figs. 7A-7C may correspond directly to the snapshots or frames of image data captured by the depth camera and/or the RGB camera. The selected frames of image data may be replayed at any speed and may be viewed one frame at a time or as separate screenshots, in addition to continuous playback of the motion. The frames of image data may be replayed at a rate corresponding to the frames-per-second capture rate. The instructional gesture data, such as that shown on the right-hand side of each of Figs. 7A-7C, may correspond to the frames of image data shown on the left side. Any number of frames of image data may be captured for any given gesture, and any number of corresponding frames of instructional gesture data may be generated. Thus, additional frames of image data and corresponding instructional gesture data may be available. The user may pause the display of the visual aid, scroll through it, view it, zoom, and so on.
The snapshot of image data on the left side of Fig. 7B depicts the user's gesture 64b at the second point in time. On the right side of the split-screen, the system displays a visual aid highlighting the error in the user's gesture. Displaying a visual representation of the user's gesture side by side with instructional gesture data that highlights errors in the gesture may help the user correct his or her motion in the physical space. For example, Fig. 7B highlights an error in the user's gesture by using an arrow to point to the position of the user's arm. In this example, the visual aid points to a better position for the user's arm at that point in the gesture, teaching the user how to properly make the uppercut punch motion or be more successful in the boxing game. Similarly, Fig. 7C depicts a snapshot of the user's image data, pointing out that, at this last point in the uppercut punch gesture, the user would achieve better success by completing the motion with his or her arm at a higher position. The side-by-side depiction demonstrates the delta between the user's actual position and the ideal position.
The system may display the user's motion live or in real time along with the visual aid to highlight errors in the user's motion. Thus, rather than the snapshots of image data in Figs. 7A-7C being a playback of the user's gesture, the user may gesture in the physical space and view the live visual aid on the right side of each split-screen. For example, instructional gesture data may be provided in an interactive training session. The training session may demonstrate the correct positioning at each discrete point in time in the uppercut punch gesture. At each discrete point, the user may gesture in the physical space and observe, in real time, how the user's gesture compares with the visual aid showing the correct gesture position. The user may cycle through the points in time, learning and receiving real-time visual aids about the correct body position at each point and the ideal position the user should be in at that moment. The user may correct his or her motion frame by frame.
Although Figs. 7A-7C depict a split-screen display of the user's visual representation and instructional gesture data, it is contemplated that instructional gesture data may be provided in any suitable form. For example, instead of a split-screen, only the left side of the split-screen in Figs. 7A-7C may be displayed, with the errors in the user's gesture highlighted. In that example, the errors are highlighted by the arrows shown pointing to the deviations in the user's gesture. In another example, alone or combined with another form of instructional gesture data, the aid may be audible, such as a voice-over that verbally expresses the user's errors or possible corrections.
The instructional gesture data may indicate the proper alignment of the user in the physical space, informing the user how to move into the proper field of view of the capture device's capture field. For example, if a particular gesture filter fails to provide consistent results for a gesture because the motion of a limb captured by the capture device moves out of the capture device's field of view, the instructional gesture data may comprise sector data or aids informing the user that he or she needs to move in the physical space to better align with the capture device. For example, a voice-over may say "please move to the left," or the user's on-screen visual representation (such as the visual representations shown at 64a, 64b, 64c) may appear only partially on screen, indicating that the user needs to realign himself or herself in the physical space.
Figs. 8A and 8B depict additional examples of providing instructional gesture data to a user. In Fig. 8A, a representation 804 of a corrective gesture animation (shown as a stick figure representation) is superimposed or overlaid on the user's visual representation. Thus, an indication of the proper motion overlays the user's image data. The user may replay his or her motion and view the instructional gesture data overlaying the visual representation of the user's gesture. The user may then observe the delta between the user's actual position and the ideal position. The arrow is just one example of providing a visual representation of the delta between the user's arm position and the visual aid. Highlighting the delta allows the user to identify what modification in the physical space would improve the gesture.
During the execution of an application (such as an in-progress game), the training data may be displayed as an overlay, as shown in Fig. 8A. Thus, it may be unnecessary to interrupt the game to enter a separate training mode. The instructional gesture data may comprise hints or small suggestions regarding the user's motion, and the hints may be presented as an overlay during the game, such that the game continues uninterrupted while the instructional gesture data is provided.
The example visual aid display in Fig. 8B is a visual representation of instructional gesture data comprising a demonstration of the gesture. The demonstration may be broken down into stages or presented as a continuous video clip. The visual representation of the instructional gesture data may take any suitable form, such as a skeletal representation, a ghosted image, or a player avatar.
The application may trigger entry into a training mode as a result of the user selecting the training mode, the detection of an error in the user's gesture, the start-up of the application, and so on. For example, upon detecting an error in the user's gesture, an optional entry into a training mode may be offered to the user. The system may not recognize the gesture the user is making, and may offer training suggestions based on a prediction of the user's intended motion. The system may identify the gesture and offer suggestions about better gestures applicable to the application at that point, or offer training to help the user make the gesture better for more success in the application.
Fig. 9 depicts an example of options 902a, 902b, 902c, 902d that the system may display to the user in the boxing game. The display device 16 may display options for predicted gestures, and the user may select an option to receive training to correctly perform a gesture or to improve its execution. The possible gestures may be any gestures applicable to the gesture-based system, the executing application, a plug-in with additional gestures, etc. The options 902a, 902b, 902c, 902d shown in Fig. 9 are applicable to the executing application—the boxing game.
In this example, the user's gesture in the physical space is recognized as a punch gesture. As described above, the gesture recognition engine 190 may include a collection of gesture filters 191. Each filter 191 may comprise information defining a gesture along with parameters, or metadata, for that gesture. The data captured by the cameras 26, 28 and the device 20, in the form of the skeletal model and the movements associated with it, may be compared against the gesture filters in the gesture recognition engine 190 to identify when the user has performed one or more gestures.
In an example embodiment, the system collects and stores historical data for a particular user, such as by storing the historical data in a user profile. The system may adapt the filter parameters and threshold levels for the user based on this historical data. For example, the filter parameters that define the volume of space for identifying a gesture as an overhand throw may be set to default values according to research or simulation data. The user's hand, however, may tend to be farther from the user's head than is typical, and therefore outside the bounds of the volume of space set for an overhand throw. Initially, this variation may indicate a failure in the performance of the gesture, triggering a demonstration of how to perform the gesture or an option to enter a training mode to learn it. Over time, the system may collect data about the user and modify the filter parameters to accommodate the user's tendencies. For example, the parameters defining the volume of space could be modified to shift the volume so that it aligns more closely with the user's body positioning. The threshold acceptance levels could change correspondingly.
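The sketch below illustrates this adaptation under stated assumptions: the profile stores observed hand positions, and the adaptation rule shifts the volume's center toward the user's observed mean, clamped so the gesture still resembles an overhand throw. Both the rule and the clamp are illustrative, not from the patent.

```python
# Hypothetical per-user adaptation of a volume-of-space parameter.
from statistics import mean

def adapt_overhand_volume(default_center_x, observed_hand_xs, max_shift=0.3):
    """Shift the volume's center toward where this user's hand actually
    travels, clamped so the adapted gesture stays recognizable."""
    if not observed_hand_xs:
        return default_center_x
    shift = mean(observed_hand_xs) - default_center_x
    shift = max(-max_shift, min(max_shift, shift))
    return default_center_x + shift

# Example: this user's hand tends to sit 0.2 units farther out than the
# researched default, so the volume follows the user over time.
history = [0.18, 0.22, 0.21, 0.19]
assert abs(adapt_overhand_volume(0.0, history) - 0.2) < 0.05
```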
The training session may be interactive, such that the user's actual motion is evaluated, highlighted, corrected, etc., in a manner such as that depicted in Figs. 7A-7C. The visual aid may also be a demonstration of the correct body positioning, applicable to the application, that the user can emulate in the physical space, without any visual representation of the user's motion. Fig. 8B depicts an example of a demonstration of the correct body positioning via separate frames of motion. The demonstration may highlight where the user should place his or her body to properly perform the gesture. The demonstration may be independent of the user's gesture. Alternatively, the visual aid may be the result of an evaluation of the user's gesture, with the aid pointing out specific areas of improvement tailored to that user.
Other techniques for displaying visual aids are possible. For example, the visual representation of the instructional gesture data may occupy a portion of the display space, where that portion is smaller than the total display space. Providing a visual aid of instructional gesture data may involve a coach, such as a user-created coach, a human coach, a prerecorded coach, etc. The coach may pop up as a video feed. The coach may provide verbal instructions that correspond to the demonstrated gesture. A user may learn a gesture, such as how to throw a curveball, by accessing the trainer. The coach, in essence, walks the user through each of the gestures that make up a throw. The user may select a specific coach, such as by choosing from a collection of avatars representing real or fictional people. Different personalities may be used, such as a famous baseball player. The representation of the coach may be prerecorded data displayed with the demonstrated gesture. Alternatively, the visual aid may be a live human coach displayed in real time with another user. The visual aid may take the form of a voice-over or a live coach who provides gesture demonstrations via a network.
Fig. 10 depicts an example of two users, remotely connected, interacting through the execution of a boxing game application. The remote connection may be over a network, for example. The network may be managed by a host and may be a subscription-based system. Such a connection allows users to be remotely connected or networked so that multiple users can interact via their respective computing environments.
For example, consider first and second remote users executing a game application, playing as a team against other users over a remote connection. The first user recognizes that the second user is performing a certain motion improperly and is thereby causing the team to fail in the game. The first user can provide a demonstration of the gesture 1002a in real time, such that the second user can observe the visual aid 1002b on the display in the second user's computing environment. The second user's computing environment can receive the information related to the demonstrated gesture and provide the visual aid to the second user. The visual aid is provided to the second user live, delayed only by the time spent transmitting it from the first user and processing and displaying the information to the second user. Suitable techniques may be implemented so that transmission and processing times are fast, minimizing the delay of the visual aid to the second user based on the first user's live gesture.
The real-time display of the visual representation of user #1 to user #2 may be the result of a number of triggers. For example, user #1 may request instruction from user #2, user #2 may identify errors in the gestures of user #1, or the two users may interact for the purpose of learning techniques from each other. The users may pause the application and conduct an interactive training session. The computing environment of user #1 may display a visual representation of user #2, simulating an in-person training session. User #2 may similarly communicate with user #1, demonstrate motions, and so on.
Fig. 11A depicts example operational procedures for a gesture coach.
Operation 1102 depicts receiving data captured by the capture device 20, the data corresponding to a gesture performed by a user. A capture device at a distance may capture a scene that contains the entire user, such as from the floor to the ceiling and to the wall on each side of the room that the user occupies. A capture device may also capture a scene that contains only part of the user, such as the user from the abdomen up as he or she sits at a desk. The capture device may also capture an object controlled by the user, such as a prop camera the user holds in his or her hand.
Operation 1104 depicts analyzing the data to produce an output corresponding to whether the data corresponds to a system-recognized gesture. In an embodiment, this analysis may be performed by the gesture recognizer engine 190 with a filter 518 applied to the data.
Operation 1106 depicts determining from the output that the user is unlikely to have correctly performed the system-recognized gesture. In an embodiment, the system-recognized gesture comprises a gesture corresponding to a filter.
In an embodiment, the output comprises a confidence level. In an embodiment, the user is unlikely to have correctly performed the gesture corresponding to the filter when the confidence level is below a threshold. This threshold may be the level at which the user likely performed the gesture correctly. There may be a difference between the threshold level above which the user likely performed the gesture and the threshold level below which the user is unlikely to have performed it; confidence levels between those two thresholds are considered neither likely nor unlikely.
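A sketch of this two-threshold reading of a confidence level follows: above the upper threshold the gesture was likely performed, below the lower it was unlikely, and between the two it is neither. The numeric thresholds are assumptions for illustration.

```python
# Hypothetical two-threshold classification of a filter's confidence output.
LIKELY_THRESHOLD = 0.75
UNLIKELY_THRESHOLD = 0.40

def classify_performance(confidence: float) -> str:
    if confidence >= LIKELY_THRESHOLD:
        return "likely"
    if confidence <= UNLIKELY_THRESHOLD:
        return "unlikely"
    return "indeterminate"  # between the thresholds: neither likely nor unlikely

assert classify_performance(0.9) == "likely"
assert classify_performance(0.3) == "unlikely"
assert classify_performance(0.6) == "indeterminate"
```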
In an embodiment where the gesture filter comprises a plurality of outputs, the user is unlikely to have correctly performed the gesture when at least one output corresponds to the user unlikely having correctly performed it. For example, a "steer a car" filter may comprise outputs for the distance between the user's hands, the position of the user's hands relative to the rest of his body, and the angle at which the hands are rotated. Where the user has his hands separated by 12-18", positioned in front of him, and each hand rotated away from vertical by a number of degrees indicating the same steering angle, the user is likely performing the driving gesture. Were the user to have his hands properly separated and placed in front of him, but with the hands rotated outward in opposite directions, such that his left hand indicates a left turn and his right hand indicates a right turn, the failure to satisfy even one of these parameters may be enough to determine that the user is unlikely to be correctly performing the gesture.
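The per-output check just described can be sketched as follows: a single failing output suffices to conclude that the gesture was unlikely to have been performed correctly. The parameter ranges paraphrase the example above (hands 12-18" apart, in front of the body, rotations indicating one consistent turn direction) and are illustrative only.

```python
# Hypothetical multi-output check for the "steer a car" filter example.
def unlikely_correct(hand_separation_in, hands_in_front, left_rot, right_rot):
    checks = [
        12.0 <= hand_separation_in <= 18.0,     # proper hand separation
        hands_in_front,                          # hands held before the body
        (left_rot >= 0) == (right_rot >= 0),     # both indicate the same turn
    ]
    return not all(checks)  # one failing output is enough

# Hands properly separated and in front, but rotated in opposite directions:
assert unlikely_correct(15.0, True, left_rot=-20.0, right_rot=20.0)
# All three outputs consistent: likely performing the gesture.
assert not unlikely_correct(15.0, True, left_rot=20.0, right_rot=25.0)
```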
Operation 1108 depicts determining from the output that the user likely intends to perform the system-recognized gesture.
In an embodiment where the user must hold a gesture, such as holding a steering position for an extended period of time, it may be determined that the user likely intends to perform the gesture where the output corresponds to the gesture being performed at least intermittently, but not to the gesture being held for the extended period.
In an embodiment where the gesture filter comprises a plurality of outputs, the user likely intends to perform the gesture when at least one output corresponds to the user performing it. For example, given the "steer a vehicle" filter discussed above, the user may have the correct spacing between his hands, and his respective hand rotations may correspond to the same steering direction, yet he holds his hands in a near-resting position at his sides rather than in front of his body. In this example, two of the outputs (hand spacing and hand rotation) indicate that the user likely intends to make the steering gesture, and it may be determined that he intends to perform it even though the third output (hand position) does not indicate this.
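Intent can then be inferred even where correctness fails, for instance by requiring that most, but not necessarily all, outputs match; the two-of-three rule below is an illustrative assumption rather than anything the patent specifies:

def likely_intends(outputs):
    """Infer intent when most outputs match the gesture. Operation 1106
    has already determined that the gesture is not correctly performed."""
    checks = ("separation_ok", "hands_in_front", "tilt_consistent")
    return sum(1 for c in checks if outputs[c]) >= 2

# Correct spacing and rotation, but hands resting at the sides:
print(likely_intends({"separation_ok": True,
                      "hands_in_front": False,
                      "tilt_consistent": True}))  # True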
Operation 1110 depicts providing help regarding the gesture performed by the user.
In an embodiment, providing help comprises adjusting the output and sending the adjusted output to the application corresponding to the gesture the user performs. This may be akin to relaxing a parameter or tolerance of the application. Where it is determined that the user intends to perform a gesture, the output of the corresponding gesture filter may be changed to an output that corresponds to the gesture likely being performed, while still preserving the user's intent. For example, where the user appears to intend to steer the vehicle sharply to the left but has incorrect hand position, the output corresponding to hand position may be adjusted so that the result still corresponds to steering sharply left (rather than turning slowly left, or turning right).
In an embodiment, adjusting the output comprises increasing the responsiveness of the filter. For example, the user may make small movements that likely correspond to his intent but that fail to register as performing the gesture. For the vehicle-steering gesture, this may mean the user rotates his hands only a small amount whenever he intends to turn, no matter how sharply. These movements can be amplified. Where the user rotates his hands at most 20° and a 90° rotation corresponds to the sharpest turn, the user's actual rotation can be multiplied by a factor of 4.5, so that his 20° rotation is treated as if it were a 90° rotation.
This mapping from actual movement to intended movement may be linear or nonlinear. It may be that the user performs very subtle steering motions approximately correctly but fails to perform sharper turns. In that case, the responsiveness to subtle steering motions may be increased only slightly or not at all, while the responsiveness to motions meant to convey sharp steering is increased greatly.
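Both mappings might be sketched as follows, using the 20° and 90° figures above; the quadratic exponent in the nonlinear variant is an illustrative choice:

MAX_OBSERVED_DEG = 20.0  # the most this user actually rotates his hands
FULL_TURN_DEG = 90.0     # rotation the filter treats as the sharpest turn

def amplify_linear(actual_deg):
    """Multiply by 90/20 = 4.5, so a 20-degree rotation reads as 90."""
    return actual_deg * (FULL_TURN_DEG / MAX_OBSERVED_DEG)

def amplify_nonlinear(actual_deg, exponent=2.0):
    """Leave subtle motion nearly as-is; boost sharp motion toward 90.

    The gain grows with the size of the motion: 2 degrees maps to about
    2.7 degrees, while 20 degrees reaches the full 90.
    """
    sign = 1.0 if actual_deg >= 0 else -1.0
    fraction = min(abs(actual_deg) / MAX_OBSERVED_DEG, 1.0)
    extra = (FULL_TURN_DEG - MAX_OBSERVED_DEG) * fraction ** exponent
    return sign * (min(abs(actual_deg), MAX_OBSERVED_DEG) + extra)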
In an embodiment, the output is adjusted only after determining that the filter is one that may be adjusted. Some filters benefit from adjustment. For example, a user who performs a "fire weapon" gesture poorly and frequently misses his intended target will likely feel frustrated at the poor control and have a bad user experience. Thus, adjusting the "fire weapon" filter to assist the user's aim may be beneficial, and the "fire weapon" filter may be an adjustable filter. For a "steer a vehicle" gesture, however, the same may not hold. A user may drive very badly in the conventional sense, frequently crashing or leaving the track, yet some users enjoy ignoring the stated goal of a scene and performing badly on purpose. In that situation, adjusting the "steer a vehicle" filter may actually harm the user's experience, because it prevents him from doing what he intends to do. Here, the "steer a vehicle" gesture may be a non-adjustable gesture.
Whether a filter may be adjusted can be determined, for example, through a boolean output of the filter itself that is readable by the entity adjusting the filter, or through a boolean set in a data structure corresponding to the filter that is readable by that entity.
In an embodiment, providing help comprises replacing the filter with a second filter. For example, the user may have difficulty performing the gesture associated with an "expert steering" filter, which has a low tolerance for variation from the exemplary motion. This may be determined, and the "expert steering" filter may be replaced with a "novice steering" filter, which has a higher tolerance for variation than the "expert steering" filter. The replacement may be made, for example, by indicating to the associated application that it should use the output of the new filter in place of the output of the prior filter, or by removing the prior filter from the gesture recognizer engine and placing the new filter in its position.
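A sketch of both mechanisms from the preceding paragraphs, the readable adjustability boolean and in-place filter replacement; the class names and tolerance fields are assumptions:

class ExpertSteeringFilter:
    name = "steering"
    adjustable = False        # boolean readable by the adjusting entity
    tilt_tolerance_deg = 5.0  # low tolerance for variation

class NoviceSteeringFilter:
    name = "steering"
    adjustable = True
    tilt_tolerance_deg = 20.0  # higher tolerance for variation

def maybe_adjust_output(gesture_filter, outputs):
    """Adjust outputs only if the filter's boolean says it may be."""
    if getattr(gesture_filter, "adjustable", False):
        outputs = dict(outputs)  # a real coach would modify values here
    return outputs

def swap_in_novice(engine):
    """Remove the expert filter from the engine and place the novice
    filter in its slot (one of the two replacement options above)."""
    for i, f in enumerate(engine.filters):
        if isinstance(f, ExpertSteeringFilter):
            engine.filters[i] = NoviceSteeringFilter()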
In an embodiment, providing help comprises suspending the application corresponding to the gesture the user performs. For example, where it is determined that help should be provided to teach the user how to correctly perform a gesture, the user may find it difficult to learn the gesture while still interacting with the application. The application may therefore be paused or suspended while help is provided for some period, such as until the user has verifiably performed the gesture consistently, with the application resuming after the help session.
In an embodiment, providing help comprises displaying the output to the user on a display device. For example, where the output comprises a confidence level, the confidence level may be graphed against time while the user attempts to perform the gesture. When he performs the gesture correctly, he will see the confidence level rise accordingly and can associate those movements with correctly performing the gesture. The display may also include an indication of when the output is acceptable, such as via a color change, an alert tone, or a flash on the screen. Where hand distance is graphed against time and the hand distance must lie between 12" and 18", the graph may be green whenever the user's hand distance is between 12" and 18" and red at all other times.
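Such a display might be sketched as follows, with matplotlib standing in for whatever rendering the computing environment actually uses:

import matplotlib.pyplot as plt

def plot_hand_distance(times, distances, lo=12.0, hi=18.0):
    """Graph hand distance vs. time; green when within [lo, hi] inches."""
    colors = ["green" if lo <= d <= hi else "red" for d in distances]
    plt.scatter(times, distances, c=colors)
    plt.axhspan(lo, hi, alpha=0.1)  # shade the acceptable band
    plt.xlabel("time (s)")
    plt.ylabel("hand distance (in)")
    plt.show()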
In an embodiment, providing help comprises displaying a representation of the gesture performed by the user alongside a demonstration of the gesture. This may comprise showing the two gestures side by side so that the user can visually identify which part of his movement is incorrect. In addition, where the user performs part of the gesture correctly and part incorrectly, there may be an indication, such as a played sound, at those moments when the user is performing correctly. The help may further include direct guidance on the differences between the two gestures, such as displaying text reading "Your hands must be 12-18" apart. Your hands appear too far apart. Try bringing them closer together."
In an embodiment, providing help comprises displaying the differences between the gesture performed by the user and the demonstration of the gesture. This may comprise superimposing the demonstration on top of the user's performed gesture so that the differences are apparent, or highlighting the areas of the body where differences exist. In an embodiment, the user's performed gesture and the demonstration are displayed differently so that each can be identified independently. For example, one of the two may be shown as a video or avatar representation of the user superimposed on the other, with the demonstration shown as a wireframe avatar, or vice versa.
In an embodiment, the help originates from another user. In an embodiment, the help originates from at least one of a set comprising: help at or near the top of a leaderboard, highly rated help, help created by a user who has been identified as suitable to provide help, help that a second user has designated as suitable for this user, help from a user of the same culture as the user, help from a user who speaks the user's language, help from a user of a similar age to the user, and help from a user at a similar location to the user.
In an embodiment, providing help comprises relaxing a tolerance level associated with the filter.
Figures 11B and 11C depict exemplary architectures for integrating gesture coach 1150 with gesture recognizer engine 190 and application 1152.
In Figure 11B, both gesture coach 1150 and application 1152 receive the output of each filter 518 directly from that filter. This architecture allows gesture coach 1150 to monitor the filter outputs to determine whether help is appropriate while application 1152 receives those outputs at the same time. Optionally, gesture coach 1150 may communicate with application 1152, such as by sending modified outputs for application 1152 to use.
In Figure 11C, gesture coach 1150 receives the output of each filter 518 from those filters 518 and then passes the outputs on to application 1152. In this architecture, gesture coach 1150 may modify any output it has received before sending it on to application 1152.
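The difference between the two architectures can be sketched as a fan-out versus a pipeline, reusing the hypothetical engine from earlier; GestureCoach, Application, and the dispatch functions are illustrative assumptions:

class Application:
    def handle(self, outputs):
        pass  # game logic would consume the filter outputs here

class GestureCoach:
    def observe(self, outputs):
        """Monitor outputs to decide whether help is appropriate;
        may return a modified copy of the outputs."""
        return outputs  # placeholder: a real coach would inspect these

def dispatch_fig_11b(engine, frame, coach, application):
    """Fig. 11B: coach and application each receive the filter outputs."""
    outputs = engine.analyze(frame)
    coach.observe(outputs)       # the coach monitors in parallel
    application.handle(outputs)  # the application gets them unchanged

def dispatch_fig_11c(engine, frame, coach, application):
    """Fig. 11C: the coach sits between the filters and the application."""
    outputs = engine.analyze(frame)
    application.handle(coach.observe(outputs))  # possibly modified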
Figures 12A and 12B depict example filter outputs from which it may be determined that gesture coaching is appropriate. Each graph charts the distance between the user's hands against time, as output by a "steer a vehicle" gesture filter.
Figure 12A depicts example hand distances over time for an unskilled user who has difficulty maintaining the roughly uniform hand distance that the gesture being performed requires. From this output it may be determined that help is appropriate. Likewise, where the hand distance must lie within a given range, if the user maintains a roughly uniform hand distance but that distance lies above or below the given range, it may still be determined that help is appropriate.
Figure 12B depicts example hand distances over time for a skilled user who can maintain the roughly uniform hand distance that the gesture being performed requires. Although the hand distance is not constant, the filter may allow this variation so long as it is sufficiently small.
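Read programmatically, Figures 12A and 12B suggest a simple heuristic: help is appropriate when the hand-distance series varies too much, or is uniform but sits outside the required range. The tolerance value below is an illustrative assumption:

from statistics import mean, pstdev

def help_appropriate(distances, lo=12.0, hi=18.0, tolerance=1.5):
    """True for the unskilled user of Fig. 12A (unsteady, or steady but
    out of range); False for the skilled user of Fig. 12B."""
    if pstdev(distances) > tolerance:
        return True                           # cannot hold a uniform distance
    return not (lo <= mean(distances) <= hi)  # uniform but outside the range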
Conclusion
While the present invention has been described in connection with the preferred aspects illustrated in the various figures, it is understood that other similar aspects may be used, or that modifications and additions may be made to the described aspects, for performing the same function of the present invention without deviating therefrom. Therefore, the present invention should not be limited to any single aspect, but rather construed in breadth and scope in accordance with the appended claims. For example, the various procedures described herein may be implemented with hardware or software, or a combination of both. Thus, the methods and apparatus of the disclosed embodiments, or certain aspects or portions thereof, may take the form of program code (that is, instructions) embodied in tangible media such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium. When the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus configured to practice the disclosed embodiments. In addition to the specific implementations explicitly set forth herein, other aspects and implementations will be apparent to those skilled in the art from consideration of the specification disclosed herein. It is intended that the specification and the illustrated implementations be considered as examples only.

Claims (15)

1. A method for providing help with a gesture performed by a user, comprising:
receiving data captured by a capture device (20), the data corresponding to a gesture performed by the user (518) (1102);
analyzing the data to produce an output corresponding to whether the data corresponds to a system-recognized gesture (1104);
determining from the output that the user is unlikely to have correctly performed the system-recognized gesture (1106);
determining from the output that the user likely intends to perform the system-recognized gesture (1108); and
providing help with the gesture performed by the user (1110).
2. The method of claim 1, wherein providing help comprises:
adjusting the output; and
sending the adjusted output to an application corresponding to the system-recognized gesture.
3. The method of claim 2, wherein adjusting the output comprises:
increasing a likelihood that the output corresponds to the user correctly performing the system-recognized gesture.
4. The method of claim 2, further comprising:
determining that the output may be adjusted before adjusting the output.
5. The method of claim 1, wherein the output comprises a confidence level.
6. The method of claim 5, wherein the user is unlikely to have correctly performed the system-recognized gesture when the confidence level is below a threshold.
7. The method of claim 1, wherein analyzing the data to produce an output corresponding to whether the data corresponds to the system-recognized gesture further comprises:
analyzing the data with a filter to produce an output corresponding to whether the data corresponds to the system-recognized gesture; and
wherein providing help comprises:
replacing the filter with a second filter.
8. The method of claim 1, wherein providing help comprises:
suspending an application corresponding to the gesture performed by the user.
9. The method of claim 1, wherein providing help comprises:
displaying a representation of the gesture performed by the user and a demonstration of the system-recognized gesture.
10. The method of claim 9, further comprising:
displaying a difference between the gesture performed by the user and the demonstration of the system-recognized gesture.
11. A system for providing help with a gesture performed by a user, comprising:
a processor (259);
a component that receives data captured by a camera, the data corresponding to a gesture performed by the user (1102);
a component that analyzes the data to produce an output corresponding to whether the data corresponds to a system-recognized gesture (1104);
a component that determines from the output that the user is unlikely to have correctly performed the system-recognized gesture (1106);
a component that determines from the output that the user likely intends to perform the system-recognized gesture (1108); and
a component that provides help with the gesture performed by the user (1110).
12. The system of claim 11, wherein the component that provides help further comprises:
a component that relaxes a tolerance level associated with the component that analyzes the data.
13. The system of claim 11, wherein the component that provides help further comprises:
a component that adjusts the output; and
a component that sends the adjusted output to an application corresponding to the gesture performed by the user.
14. The system of claim 13, wherein the component that adjusts the output comprises:
a component that increases a responsiveness of the component that analyzes the data.
15. The system of claim 11, wherein the output comprises a confidence level.
CN2010800246590A 2009-05-29 2010-05-25 Gesture coach Active CN102448561B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US12/474,453 US8418085B2 (en) 2009-05-29 2009-05-29 Gesture coach
US12/474,453 2009-05-29
PCT/US2010/036005 WO2010138470A2 (en) 2009-05-29 2010-05-25 Gesture coach

Publications (2)

Publication Number Publication Date
CN102448561A true CN102448561A (en) 2012-05-09
CN102448561B CN102448561B (en) 2013-07-10

Family

ID=43221716

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010800246590A Active CN102448561B (en) 2009-05-29 2010-05-25 Gesture coach

Country Status (5)

Country Link
US (1) US8418085B2 (en)
EP (1) EP2435147A4 (en)
CN (1) CN102448561B (en)
BR (1) BRPI1011193B1 (en)
WO (1) WO2010138470A2 (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106446569A (en) * 2016-09-29 2017-02-22 宇龙计算机通信科技(深圳)有限公司 Movement guidance method and terminal
CN108293171A (en) * 2015-12-01 2018-07-17 索尼公司 Information processing equipment, information processing method and program
CN109583363A (en) * 2018-11-27 2019-04-05 湖南视觉伟业智能科技有限公司 The method and system of speaker's appearance body movement are improved based on human body critical point detection
CN110992775A (en) * 2019-12-31 2020-04-10 平顶山学院 Percussion music rhythm training device
CN111063167A (en) * 2019-12-25 2020-04-24 歌尔股份有限公司 Fatigue driving recognition prompting method and device and related components
US20220256347A1 (en) * 2021-02-09 2022-08-11 Qualcomm Incorporated Context Dependent V2X Misbehavior Detection

Families Citing this family (155)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9952673B2 (en) * 2009-04-02 2018-04-24 Oblong Industries, Inc. Operating environment comprising multiple client devices, multiple displays, multiple users, and gestural control
US9082117B2 (en) * 2008-05-17 2015-07-14 David H. Chin Gesture based authentication for wireless payment by a mobile electronic device
US9196169B2 (en) 2008-08-21 2015-11-24 Lincoln Global, Inc. Importing and analyzing external data using a virtual reality welding system
US8719714B2 (en) 2009-07-08 2014-05-06 Steelseries Aps Apparatus and method for managing operations of accessories
US9737796B2 (en) 2009-07-08 2017-08-22 Steelseries Aps Apparatus and method for managing operations of accessories in multi-dimensions
US9971807B2 (en) 2009-10-14 2018-05-15 Oblong Industries, Inc. Multi-process interactive systems and methods
US9710154B2 (en) 2010-09-03 2017-07-18 Microsoft Technology Licensing, Llc Dynamic gesture parameters
EP2646948B1 (en) * 2010-09-30 2018-11-14 Orange User interface system and method of operation thereof
EP2635988B1 (en) * 2010-11-05 2020-04-29 NIKE Innovate C.V. Method and system for automated personal training
US9977874B2 (en) 2011-11-07 2018-05-22 Nike, Inc. User interface for remote joint workout session
JP5304774B2 (en) * 2010-12-08 2013-10-02 株式会社Jvcケンウッド Video / audio processing apparatus and video / audio processing method
CN103154856B (en) 2010-12-29 2016-01-06 英派尔科技开发有限公司 For the environmental correclation dynamic range control of gesture identification
US20120192088A1 (en) * 2011-01-20 2012-07-26 Avaya Inc. Method and system for physical mapping in a virtual world
US9224166B2 (en) 2011-03-08 2015-12-29 Bank Of America Corporation Retrieving product information from embedded sensors via mobile device video analysis
US9317860B2 (en) 2011-03-08 2016-04-19 Bank Of America Corporation Collective network of augmented reality users
US8873807B2 (en) 2011-03-08 2014-10-28 Bank Of America Corporation Vehicle recognition
US8922657B2 (en) 2011-03-08 2014-12-30 Bank Of America Corporation Real-time video image analysis for providing security
US9773285B2 (en) 2011-03-08 2017-09-26 Bank Of America Corporation Providing data associated with relationships between individuals and images
US20120231840A1 (en) * 2011-03-08 2012-09-13 Bank Of America Corporation Providing information regarding sports movements
US9317835B2 (en) 2011-03-08 2016-04-19 Bank Of America Corporation Populating budgets and/or wish lists using real-time video image analysis
US8721337B2 (en) 2011-03-08 2014-05-13 Bank Of America Corporation Real-time video image analysis for providing virtual landscaping
US8718612B2 (en) 2011-03-08 2014-05-06 Bank Of American Corporation Real-time analysis involving real estate listings
US20120242793A1 (en) * 2011-03-21 2012-09-27 Soungmin Im Display device and method of controlling the same
US8620113B2 (en) 2011-04-25 2013-12-31 Microsoft Corporation Laser diode modes
US10120561B2 (en) * 2011-05-05 2018-11-06 Lenovo (Singapore) Pte. Ltd. Maximum speed criterion for a velocity gesture
JP5408188B2 (en) * 2011-05-19 2014-02-05 コニカミノルタ株式会社 CONFERENCE SYSTEM, CONFERENCE MANAGEMENT DEVICE, CONFERENCE MANAGEMENT METHOD, AND PROGRAM
EP3042704B1 (en) 2011-05-23 2019-03-06 Lego A/S A toy construction system
CN103702726B (en) * 2011-05-23 2016-01-13 乐高公司 Toy is built system, is produced the method and data handling system that build instruction
US20120304059A1 (en) * 2011-05-24 2012-11-29 Microsoft Corporation Interactive Build Instructions
US8740702B2 (en) 2011-05-31 2014-06-03 Microsoft Corporation Action trigger gesturing
US8657683B2 (en) 2011-05-31 2014-02-25 Microsoft Corporation Action selection gesturing
US8760395B2 (en) 2011-05-31 2014-06-24 Microsoft Corporation Gesture recognition techniques
US8845431B2 (en) 2011-05-31 2014-09-30 Microsoft Corporation Shape trace gesturing
US9266019B2 (en) 2011-07-01 2016-02-23 Empire Technology Development Llc Safety scheme for gesture-based game
CN103827891B (en) * 2011-07-28 2018-01-09 Arb实验室公司 Use the system and method for the multi-dimensional gesture Data Detection body kinematics of whole world generation
WO2013022222A2 (en) * 2011-08-05 2013-02-14 Samsung Electronics Co., Ltd. Method for controlling electronic apparatus based on motion recognition, and electronic apparatus applying the same
EP2986014A1 (en) 2011-08-05 2016-02-17 Samsung Electronics Co., Ltd. Method for controlling electronic apparatus based on voice recognition and motion recognition, and electronic apparatus applying the same
US9390318B2 (en) 2011-08-31 2016-07-12 Empire Technology Development Llc Position-setup for gesture-based game system
WO2013039586A1 (en) * 2011-09-16 2013-03-21 Landmark Graphics Corporation Methods and systems for gesture-based petrotechnical application control
US20130097565A1 (en) * 2011-10-17 2013-04-18 Microsoft Corporation Learning validation using gesture recognition
US9628843B2 (en) * 2011-11-21 2017-04-18 Microsoft Technology Licensing, Llc Methods for controlling electronic devices using gestures
KR101567591B1 (en) 2011-12-02 2015-11-20 엠파이어 테크놀로지 디벨롭먼트 엘엘씨 Safety scheme for gesture-based game system
US8635637B2 (en) 2011-12-02 2014-01-21 Microsoft Corporation User interface presenting an animated avatar performing a media reaction
US9100685B2 (en) 2011-12-09 2015-08-04 Microsoft Technology Licensing, Llc Determining audience state or interest using passive sensor data
US8693731B2 (en) 2012-01-17 2014-04-08 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging
US9501152B2 (en) 2013-01-15 2016-11-22 Leap Motion, Inc. Free-space user interface and control using virtual constructs
US9070019B2 (en) 2012-01-17 2015-06-30 Leap Motion, Inc. Systems and methods for capturing motion in three-dimensional space
US8638989B2 (en) 2012-01-17 2014-01-28 Leap Motion, Inc. Systems and methods for capturing motion in three-dimensional space
US10691219B2 (en) 2012-01-17 2020-06-23 Ultrahaptics IP Two Limited Systems and methods for machine control
US9679215B2 (en) 2012-01-17 2017-06-13 Leap Motion, Inc. Systems and methods for machine control
US11493998B2 (en) 2012-01-17 2022-11-08 Ultrahaptics IP Two Limited Systems and methods for machine control
US8898687B2 (en) 2012-04-04 2014-11-25 Microsoft Corporation Controlling a media program based on a media reaction
CA2775700C (en) 2012-05-04 2013-07-23 Microsoft Corporation Determining a future portion of a currently presented media program
US20140006944A1 (en) * 2012-07-02 2014-01-02 Microsoft Corporation Visual UI Guide Triggered by User Actions
US9606647B1 (en) * 2012-07-24 2017-03-28 Palantir Technologies, Inc. Gesture management system
US9285893B2 (en) 2012-11-08 2016-03-15 Leap Motion, Inc. Object detection and tracking with variable-field illumination devices
US10609285B2 (en) 2013-01-07 2020-03-31 Ultrahaptics IP Two Limited Power consumption in motion-capture systems
US9465461B2 (en) 2013-01-08 2016-10-11 Leap Motion, Inc. Object detection and tracking with audio and optical signals
US9459697B2 (en) 2013-01-15 2016-10-04 Leap Motion, Inc. Dynamic, free-space user interactions for machine control
US10042510B2 (en) 2013-01-15 2018-08-07 Leap Motion, Inc. Dynamic user interactions for display control and measuring degree of completeness of user gestures
US10134267B2 (en) 2013-02-22 2018-11-20 Universal City Studios Llc System and method for tracking a passive wand and actuating an effect based on a detected wand path
US9393695B2 (en) 2013-02-27 2016-07-19 Rockwell Automation Technologies, Inc. Recognition-based industrial automation control with person and object discrimination
US9798302B2 (en) 2013-02-27 2017-10-24 Rockwell Automation Technologies, Inc. Recognition-based industrial automation control with redundant system input support
US9804576B2 (en) 2013-02-27 2017-10-31 Rockwell Automation Technologies, Inc. Recognition-based industrial automation control with position and derivative decision reference
US9498885B2 (en) 2013-02-27 2016-11-22 Rockwell Automation Technologies, Inc. Recognition-based industrial automation control with confidence-based decision support
US20160049089A1 (en) * 2013-03-13 2016-02-18 James Witt Method and apparatus for teaching repetitive kinesthetic motion
US9389779B2 (en) * 2013-03-14 2016-07-12 Intel Corporation Depth-based user interface gesture control
US20140267611A1 (en) * 2013-03-14 2014-09-18 Microsoft Corporation Runtime engine for analyzing user motion in 3d images
US9702977B2 (en) 2013-03-15 2017-07-11 Leap Motion, Inc. Determining positional information of an object in space
US8930897B2 (en) 2013-03-15 2015-01-06 Palantir Technologies Inc. Data integration tool
US9687730B2 (en) 2013-03-15 2017-06-27 Steelseries Aps Gaming device with independent gesture-sensitive areas
US9415299B2 (en) 2013-03-15 2016-08-16 Steelseries Aps Gaming device
US9409087B2 (en) 2013-03-15 2016-08-09 Steelseries Aps Method and apparatus for processing gestures
US9604147B2 (en) 2013-03-15 2017-03-28 Steelseries Aps Method and apparatus for managing use of an accessory
US9423874B2 (en) 2013-03-15 2016-08-23 Steelseries Aps Gaming accessory with sensory feedback device
US8903717B2 (en) 2013-03-15 2014-12-02 Palantir Technologies Inc. Method and system for generating a parser and parsing complex data
US10803762B2 (en) * 2013-04-02 2020-10-13 Nec Solution Innovators, Ltd Body-motion assessment device, dance assessment device, karaoke device, and game device
TWI524213B (en) * 2013-04-02 2016-03-01 宏達國際電子股份有限公司 Controlling method and electronic apparatus
US10620709B2 (en) 2013-04-05 2020-04-14 Ultrahaptics IP Two Limited Customized gesture interpretation
US9749541B2 (en) * 2013-04-16 2017-08-29 Tout Inc. Method and apparatus for displaying and recording images using multiple image capturing devices integrated into a single mobile device
US9916009B2 (en) 2013-04-26 2018-03-13 Leap Motion, Inc. Non-tactile interface systems and methods
US9747696B2 (en) 2013-05-17 2017-08-29 Leap Motion, Inc. Systems and methods for providing normalized parameters of motions of objects in three-dimensional space
US9681186B2 (en) 2013-06-11 2017-06-13 Nokia Technologies Oy Method, apparatus and computer program product for gathering and presenting emotional response to an event
US9873038B2 (en) 2013-06-14 2018-01-23 Intercontinental Great Brands Llc Interactive electronic games based on chewing motion
US10281987B1 (en) 2013-08-09 2019-05-07 Leap Motion, Inc. Systems and methods of free-space gestural interaction
US9721383B1 (en) 2013-08-29 2017-08-01 Leap Motion, Inc. Predictive information for free space gesture control and communication
US9632572B2 (en) 2013-10-03 2017-04-25 Leap Motion, Inc. Enhanced field of view to augment three-dimensional (3D) sensory space for free-space gesture interpretation
TWI537767B (en) * 2013-10-04 2016-06-11 財團法人工業技術研究院 System and method of multi-user coaching inside a tunable motion-sensing range
US10220304B2 (en) * 2013-10-14 2019-03-05 Microsoft Technology Licensing, Llc Boolean/float controller and gesture recognition system
US9996638B1 (en) 2013-10-31 2018-06-12 Leap Motion, Inc. Predictive information for free space gesture control and communication
US9535505B2 (en) * 2013-11-08 2017-01-03 Polar Electro Oy User interface control in portable system
EP3068301A4 (en) 2013-11-12 2017-07-12 Highland Instruments, Inc. Analysis suite
WO2015073973A1 (en) 2013-11-17 2015-05-21 Team Sport IP, LLC System and method to assist in player development
IN2013MU04097A (en) * 2013-12-27 2015-08-07 Tata Consultancy Services Ltd
US9613262B2 (en) 2014-01-15 2017-04-04 Leap Motion, Inc. Object detection and tracking for providing a virtual device experience
US9911351B2 (en) * 2014-02-27 2018-03-06 Microsoft Technology Licensing, Llc Tracking objects during processes
US10613642B2 (en) * 2014-03-12 2020-04-07 Microsoft Technology Licensing, Llc Gesture parameter tuning
JP2015186531A (en) * 2014-03-26 2015-10-29 国立大学法人 東京大学 Action information processing device and program
US10338684B2 (en) 2014-03-26 2019-07-02 Intel Corporation Mechanism to enhance user experience of mobile devices through complex inputs from external displays
US10466657B2 (en) 2014-04-03 2019-11-05 Honda Motor Co., Ltd. Systems and methods for global adaptation of an implicit gesture control system
US9342797B2 (en) 2014-04-03 2016-05-17 Honda Motor Co., Ltd. Systems and methods for the detection of implicit gestures
US10409382B2 (en) 2014-04-03 2019-09-10 Honda Motor Co., Ltd. Smart tutorial for gesture control system
US10207193B2 (en) 2014-05-21 2019-02-19 Universal City Studios Llc Optical tracking system for automation of amusement park elements
US10061058B2 (en) 2014-05-21 2018-08-28 Universal City Studios Llc Tracking system and method for use in surveying amusement park equipment
US9600999B2 (en) 2014-05-21 2017-03-21 Universal City Studios Llc Amusement park element tracking system
US9616350B2 (en) 2014-05-21 2017-04-11 Universal City Studios Llc Enhanced interactivity in an amusement park environment using passive tracking elements
US10025990B2 (en) 2014-05-21 2018-07-17 Universal City Studios Llc System and method for tracking vehicles in parking structures and intersections
US9433870B2 (en) 2014-05-21 2016-09-06 Universal City Studios Llc Ride vehicle tracking and control system using passive tracking elements
US9429398B2 (en) 2014-05-21 2016-08-30 Universal City Studios Llc Optical tracking for controlling pyrotechnic show elements
CN204480228U (en) 2014-08-08 2015-07-15 厉动公司 motion sensing and imaging device
US9529605B2 (en) 2014-08-27 2016-12-27 Microsoft Technology Licensing, Llc Customizing user interface indicators based on prior interactions
US9607573B2 (en) 2014-09-17 2017-03-28 International Business Machines Corporation Avatar motion modification
US10238979B2 (en) 2014-09-26 2019-03-26 Universal City Sudios LLC Video game ride
KR101936532B1 (en) * 2014-10-10 2019-04-03 후지쯔 가부시끼가이샤 Storage medium, skill determination method, and skill determination device
US9229952B1 (en) 2014-11-05 2016-01-05 Palantir Technologies, Inc. History preserving data pipeline system and method
CN115048007B (en) * 2014-12-31 2024-05-07 创新先进技术有限公司 Device and method for adjusting interface operation icon distribution range and touch screen device
US9804696B2 (en) * 2015-01-02 2017-10-31 Microsoft Technology Licensing, Llc User-input control device toggled motion tracking
US10613637B2 (en) * 2015-01-28 2020-04-07 Medtronic, Inc. Systems and methods for mitigating gesture input error
US11347316B2 (en) * 2015-01-28 2022-05-31 Medtronic, Inc. Systems and methods for mitigating gesture input error
US9977565B2 (en) 2015-02-09 2018-05-22 Leapfrog Enterprises, Inc. Interactive educational system with light emitting controller
US9996595B2 (en) 2015-08-03 2018-06-12 Palantir Technologies, Inc. Providing full data provenance visualization for versioned datasets
US9576015B1 (en) 2015-09-09 2017-02-21 Palantir Technologies, Inc. Domain-specific language for dataset transformations
WO2017124285A1 (en) * 2016-01-18 2017-07-27 曹晟 Method for posture correction according to height information in sitting posture, and smart office desk
WO2017124283A1 (en) * 2016-01-18 2017-07-27 曹晟 Data acquisition method during sitting posture correction and smart office desk
JP6702746B2 (en) * 2016-02-10 2020-06-03 キヤノン株式会社 Imaging device, control method thereof, program, and storage medium
US9600717B1 (en) * 2016-02-25 2017-03-21 Zepp Labs, Inc. Real-time single-view action recognition based on key pose analysis for sports videos
US10129126B2 (en) 2016-06-08 2018-11-13 Bank Of America Corporation System for predictive usage of resources
US10581988B2 (en) 2016-06-08 2020-03-03 Bank Of America Corporation System for predictive use of resources
US10433196B2 (en) 2016-06-08 2019-10-01 Bank Of America Corporation System for tracking resource allocation/usage
US10291487B2 (en) 2016-06-08 2019-05-14 Bank Of America Corporation System for predictive acquisition and use of resources
US10178101B2 (en) 2016-06-08 2019-01-08 Bank Of America Corporation System for creation of alternative path to resource acquisition
US10007674B2 (en) 2016-06-13 2018-06-26 Palantir Technologies Inc. Data revision control in large-scale data analytic systems
US10102229B2 (en) 2016-11-09 2018-10-16 Palantir Technologies Inc. Validating data integrations using a secondary data store
JP6787072B2 (en) * 2016-11-21 2020-11-18 カシオ計算機株式会社 Image processing equipment, analysis system, image processing method and program
US9946777B1 (en) 2016-12-19 2018-04-17 Palantir Technologies Inc. Systems and methods for facilitating data transformation
US9922108B1 (en) 2017-01-05 2018-03-20 Palantir Technologies Inc. Systems and methods for facilitating data transformation
US10671602B2 (en) 2017-05-09 2020-06-02 Microsoft Technology Licensing, Llc Random factoid generation
US10956406B2 (en) 2017-06-12 2021-03-23 Palantir Technologies Inc. Propagated deletion of database records and derived data
US10691729B2 (en) 2017-07-07 2020-06-23 Palantir Technologies Inc. Systems and methods for providing an object platform for a relational database
WO2019008771A1 (en) * 2017-07-07 2019-01-10 りか 高木 Guidance process management system for treatment and/or exercise, and program, computer device and method for managing guidance process for treatment and/or exercise
US10956508B2 (en) 2017-11-10 2021-03-23 Palantir Technologies Inc. Systems and methods for creating and managing a data integration workspace containing automatically updated data models
US10592735B2 (en) * 2018-02-12 2020-03-17 Cisco Technology, Inc. Collaboration event content sharing
US10754822B1 (en) 2018-04-18 2020-08-25 Palantir Technologies Inc. Systems and methods for ontology migration
US11461355B1 (en) 2018-05-15 2022-10-04 Palantir Technologies Inc. Ontological mapping of data
US11875012B2 (en) 2018-05-25 2024-01-16 Ultrahaptics IP Two Limited Throwable interface for augmented reality and virtual reality environments
US11288733B2 (en) * 2018-11-14 2022-03-29 Mastercard International Incorporated Interactive 3D image projection systems and methods
WO2020251385A1 (en) * 2019-06-14 2020-12-17 Ringcentral, Inc., (A Delaware Corporation) System and method for capturing presentation gestures
US11150923B2 (en) * 2019-09-16 2021-10-19 Samsung Electronics Co., Ltd. Electronic apparatus and method for providing manual thereof
CN112486317B (en) * 2020-11-26 2022-08-09 湖北鼎森智能科技有限公司 Digital reading method and system based on gestures
US20220230079A1 (en) * 2021-01-21 2022-07-21 Microsoft Technology Licensing, Llc Action recognition
US11579704B2 (en) * 2021-03-24 2023-02-14 Meta Platforms Technologies, Llc Systems and methods for adaptive input thresholding
US11726553B2 (en) 2021-07-20 2023-08-15 Sony Interactive Entertainment LLC Movement-based navigation
US11786816B2 (en) 2021-07-30 2023-10-17 Sony Interactive Entertainment LLC Sharing movement data
US20230051703A1 (en) * 2021-08-16 2023-02-16 Sony Interactive Entertainment LLC Gesture-Based Skill Search
CN117519487B (en) * 2024-01-05 2024-03-22 安徽建筑大学 Development machine control teaching auxiliary training system based on vision dynamic capture

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1731316A (en) * 2005-08-19 2006-02-08 北京航空航天大学 Human-computer interaction method for dummy ape game
CN1764931A (en) * 2003-02-11 2006-04-26 索尼电脑娱乐公司 Method and apparatus for real time motion capture
US20070066393A1 (en) * 1998-08-10 2007-03-22 Cybernet Systems Corporation Real-time head tracking system for computer games and other applications
CN101202994A (en) * 2006-12-14 2008-06-18 北京三星通信技术研究有限公司 Method and device assistant to user for body-building

Family Cites Families (221)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4288078A (en) * 1979-11-20 1981-09-08 Lugo Julio I Game apparatus
US4695953A (en) 1983-08-25 1987-09-22 Blair Preston E TV animation interactively controlled by the viewer
US4630910A (en) 1984-02-16 1986-12-23 Robotic Vision Systems, Inc. Method of measuring in three-dimensions at high speed
US4627620A (en) 1984-12-26 1986-12-09 Yang John P Electronic athlete trainer for improving skills in reflex, speed and accuracy
US4645458A (en) 1985-04-15 1987-02-24 Harald Phillip Athletic evaluation and training apparatus
US4702475A (en) 1985-08-16 1987-10-27 Innovating Training Products, Inc. Sports technique and reaction training system
US4843568A (en) 1986-04-11 1989-06-27 Krueger Myron W Real time perception of and response to the actions of an unencumbered participant/user
US4711543A (en) 1986-04-14 1987-12-08 Blair Preston E TV animation interactively controlled by the viewer
US4796997A (en) 1986-05-27 1989-01-10 Synthetic Vision Systems, Inc. Method and system for high-speed, 3-D imaging of an object at a vision station
US5184295A (en) 1986-05-30 1993-02-02 Mann Ralph V System and method for teaching physical skills
US4751642A (en) 1986-08-29 1988-06-14 Silva John M Interactive sports simulation system with physiological sensing and psychological conditioning
US4809065A (en) 1986-12-01 1989-02-28 Kabushiki Kaisha Toshiba Interactive system and related method for displaying data to produce a three-dimensional image of an object
US4817950A (en) 1987-05-08 1989-04-04 Goo Paul E Video game control unit and attitude sensor
US5239463A (en) 1988-08-04 1993-08-24 Blair Preston E Method and apparatus for player interaction with animated characters and objects
US5239464A (en) 1988-08-04 1993-08-24 Blair Preston E Interactive video system providing repeated switching of multiple tracks of actions sequences
US4901362A (en) 1988-08-08 1990-02-13 Raytheon Company Method of recognizing patterns
US4893183A (en) 1988-08-11 1990-01-09 Carnegie-Mellon University Robotic vision system
JPH02199526A (en) 1988-10-14 1990-08-07 David G Capper Control interface apparatus
US4925189A (en) 1989-01-13 1990-05-15 Braeunig Thomas F Body-mounted video game exercise device
US5229756A (en) 1989-02-07 1993-07-20 Yamaha Corporation Image control apparatus
US5469740A (en) 1989-07-14 1995-11-28 Impulse Technology, Inc. Interactive video testing and training system
JPH03103822U (en) 1990-02-13 1991-10-29
US5101444A (en) 1990-05-18 1992-03-31 Panacea, Inc. Method and apparatus for high speed object location
US5148154A (en) 1990-12-04 1992-09-15 Sony Corporation Of America Multi-dimensional user interface
US5534917A (en) 1991-05-09 1996-07-09 Very Vivid, Inc. Video image based control system
AU656781B2 (en) 1991-05-30 1995-02-16 Richard John Baker Personalized instructional aid
KR0130552B1 (en) 1991-05-30 1998-04-10 리챠드 존 베이커 Personalized insturction aid
US5417210A (en) 1992-05-27 1995-05-23 International Business Machines Corporation System and method for augmentation of endoscopic surgery
US5295491A (en) 1991-09-26 1994-03-22 Sam Technology, Inc. Non-invasive human neurocognitive performance capability testing method and system
US6054991A (en) 1991-12-02 2000-04-25 Texas Instruments Incorporated Method of modeling player position and movement in a virtual reality system
CA2101633A1 (en) 1991-12-03 1993-06-04 Barry J. French Interactive video testing and training system
US5875108A (en) 1991-12-23 1999-02-23 Hoffberg; Steven M. Ergonomic man-machine interface incorporating adaptive pattern recognition based control system
JPH07325934A (en) 1992-07-10 1995-12-12 Walt Disney Co:The Method and equipment for provision of graphics enhanced to virtual world
US5999908A (en) 1992-08-06 1999-12-07 Abelow; Daniel H. Customer-based product design module
US5320538A (en) 1992-09-23 1994-06-14 Hughes Training, Inc. Interactive aircraft training system and method
IT1257294B (en) 1992-11-20 1996-01-12 DEVICE SUITABLE TO DETECT THE CONFIGURATION OF A PHYSIOLOGICAL-DISTAL UNIT, TO BE USED IN PARTICULAR AS AN ADVANCED INTERFACE FOR MACHINES AND CALCULATORS.
US5495576A (en) 1993-01-11 1996-02-27 Ritchey; Kurtis J. Panoramic image based virtual reality/telepresence audio-visual system and method
US5690582A (en) 1993-02-02 1997-11-25 Tectrix Fitness Equipment, Inc. Interactive exercise apparatus
JP2799126B2 (en) 1993-03-26 1998-09-17 株式会社ナムコ Video game device and game input device
US5405152A (en) 1993-06-08 1995-04-11 The Walt Disney Company Method and apparatus for an interactive video game with physical feedback
US5454043A (en) 1993-07-30 1995-09-26 Mitsubishi Electric Research Laboratories, Inc. Dynamic and static hand gesture recognition through low-level image analysis
US5423554A (en) 1993-09-24 1995-06-13 Metamedia Ventures, Inc. Virtual reality game method and apparatus
US5980256A (en) 1993-10-29 1999-11-09 Carmein; David E. E. Virtual reality system with enhanced sensory apparatus
JP3419050B2 (en) 1993-11-19 2003-06-23 株式会社日立製作所 Input device
US5347306A (en) 1993-12-17 1994-09-13 Mitsubishi Electric Research Laboratories, Inc. Animated electronic meeting place
JP2552427B2 (en) 1993-12-28 1996-11-13 コナミ株式会社 Tv play system
US5577981A (en) 1994-01-19 1996-11-26 Jarvik; Robert Virtual reality exercise machine and computer controlled video system
US5580249A (en) 1994-02-14 1996-12-03 Sarcos Group Apparatus for simulating mobility of a human
US5597309A (en) 1994-03-28 1997-01-28 Riess; Thomas Method and apparatus for treatment of gait problems associated with parkinson's disease
US5385519A (en) 1994-04-19 1995-01-31 Hsu; Chi-Hsueh Running machine
US5524637A (en) 1994-06-29 1996-06-11 Erickson; Jon W. Interactive system for measuring physiological exertion
US5563988A (en) 1994-08-01 1996-10-08 Massachusetts Institute Of Technology Method and system for facilitating wireless, full-body, real-time user interaction with a digitally represented visual environment
US6714665B1 (en) 1994-09-02 2004-03-30 Sarnoff Corporation Fully automated iris recognition system utilizing wide and narrow fields of view
US5516105A (en) 1994-10-06 1996-05-14 Exergame, Inc. Acceleration activated joystick
US5638300A (en) 1994-12-05 1997-06-10 Johnson; Lee E. Golf swing analysis system
JPH08161292A (en) 1994-12-09 1996-06-21 Matsushita Electric Ind Co Ltd Method and system for detecting congestion degree
US5594469A (en) 1995-02-21 1997-01-14 Mitsubishi Electric Information Technology Center America Inc. Hand gesture machine control system
US5682229A (en) 1995-04-14 1997-10-28 Schwartz Electro-Optics, Inc. Laser range camera
US5913727A (en) 1995-06-02 1999-06-22 Ahdoot; Ned Interactive movement and contact simulation game
US6229913B1 (en) 1995-06-07 2001-05-08 The Trustees Of Columbia University In The City Of New York Apparatus and methods for determining the three-dimensional shape of an object using active illumination and relative blurring in two-images due to defocus
IL114278A (en) 1995-06-22 2010-06-16 Microsoft Internat Holdings B Camera and method
US5682196A (en) 1995-06-22 1997-10-28 Actv, Inc. Three-dimensional (3D) video presentation system providing interactive 3D presentation with personalized audio responses for multiple viewers
JPH11508359A (en) 1995-06-22 1999-07-21 3ディブイ・システムズ・リミテッド Improved optical ranging camera
US5702323A (en) 1995-07-26 1997-12-30 Poulton; Craig K. Electronic exercise enhancer
US6308565B1 (en) 1995-11-06 2001-10-30 Impulse Technology Ltd. System and method for tracking and assessing movement skills in multidimensional space
US6098458A (en) 1995-11-06 2000-08-08 Impulse Technology, Ltd. Testing and training system for assessing movement and agility skills without a confining field
US6430997B1 (en) 1995-11-06 2002-08-13 Trazer Technologies, Inc. System and method for tracking and assessing movement skills in multidimensional space
US6073489A (en) 1995-11-06 2000-06-13 French; Barry J. Testing and training system for assessing the ability of a player to complete a task
US6176782B1 (en) 1997-12-22 2001-01-23 Philips Electronics North America Corp. Motion-based command generation technology
US5933125A (en) 1995-11-27 1999-08-03 Cae Electronics, Ltd. Method and apparatus for reducing instability in the display of a virtual environment
US5641288A (en) 1996-01-11 1997-06-24 Zaenglein, Jr.; William G. Shooting simulating process and training device using a virtual reality display screen
US6152856A (en) 1996-05-08 2000-11-28 Real Vision Corporation Real time simulation using position sensing
US6173066B1 (en) 1996-05-21 2001-01-09 Cybernet Systems Corporation Pose determination and tracking by matching 3D objects to a 2D sensor
US5989157A (en) 1996-08-06 1999-11-23 Walton; Charles A. Exercising system with electronic inertial game playing
CN1168057C (en) 1996-08-14 2004-09-22 挪拉赫梅特·挪利斯拉莫维奇·拉都包夫 Method for following and imaging a subject's three-dimensional position and orientation, method for presenting a virtual space to a subject,and systems for implementing said methods
JP3064928B2 (en) 1996-09-20 2000-07-12 日本電気株式会社 Subject extraction method
DE69626208T2 (en) 1996-12-20 2003-11-13 Hitachi Europ Ltd Method and system for recognizing hand gestures
US5904484A (en) * 1996-12-23 1999-05-18 Burns; Dave Interactive motion training device and method
US6009210A (en) 1997-03-05 1999-12-28 Digital Equipment Corporation Hands-free interface to a virtual reality environment using head tracking
US6100896A (en) 1997-03-24 2000-08-08 Mitsubishi Electric Information Technology Center America, Inc. System for designing graphical multi-participant environments
US5877803A (en) 1997-04-07 1999-03-02 Tritech Mircoelectronics International, Ltd. 3-D image detector
US6215898B1 (en) 1997-04-15 2001-04-10 Interval Research Corporation Data processing system and method
US6075895A (en) 1997-06-20 2000-06-13 Holoplex Methods and apparatus for gesture recognition based on templates
JP3077745B2 (en) 1997-07-31 2000-08-14 日本電気株式会社 Data processing method and apparatus, information storage medium
US6188777B1 (en) 1997-08-01 2001-02-13 Interval Research Corporation Method and apparatus for personnel detection and tracking
US6289112B1 (en) 1997-08-22 2001-09-11 International Business Machines Corporation System and method for determining block direction in fingerprint images
US6720949B1 (en) 1997-08-22 2004-04-13 Timothy R. Pryor Man machine interfaces and applications
AUPO894497A0 (en) 1997-09-02 1997-09-25 Xenotech Research Pty Ltd Image processing method and apparatus
WO1999015863A1 (en) 1997-09-24 1999-04-01 3Dv Systems, Ltd. Acoustical imaging system
EP0905644A3 (en) 1997-09-26 2004-02-25 Matsushita Electric Industrial Co., Ltd. Hand gesture recognizing device
US6141463A (en) 1997-10-10 2000-10-31 Electric Planet Interactive Method and system for estimating jointed-figure configurations
US6101289A (en) 1997-10-15 2000-08-08 Electric Planet, Inc. Method and apparatus for unencumbered capture of an object
US6130677A (en) 1997-10-15 2000-10-10 Electric Planet, Inc. Interactive computer vision system
AU9808298A (en) 1997-10-15 1999-05-03 Electric Planet, Inc. A system and method for generating an animatable character
US6072494A (en) 1997-10-15 2000-06-06 Electric Planet, Inc. Method and apparatus for real-time gesture recognition
WO1999019828A1 (en) 1997-10-15 1999-04-22 Electric Planet, Inc. Method and apparatus for performing a clean background subtraction
US6181343B1 (en) 1997-12-23 2001-01-30 Philips Electronics North America Corp. System and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs
US6159100A (en) 1998-04-23 2000-12-12 Smith; Michael D. Virtual reality game
US6077201A (en) 1998-06-12 2000-06-20 Cheng; Chau-Yang Exercise bicycle
US6801637B2 (en) 1999-08-10 2004-10-05 Cybernet Systems Corporation Optical body tracker
US7036094B1 (en) 1998-08-10 2006-04-25 Cybernet Systems Corporation Behavior recognition system
US6681031B2 (en) 1998-08-10 2004-01-20 Cybernet Systems Corporation Gesture-controlled interfaces for self-service machines and other applications
US20010008561A1 (en) 1999-08-10 2001-07-19 Paul George V. Real-time object tracking system
US6950534B2 (en) 1998-08-10 2005-09-27 Cybernet Systems Corporation Gesture-controlled interfaces for self-service machines and other applications
IL126284A (en) 1998-09-17 2002-12-01 Netmor Ltd System and method for three dimensional positioning and tracking
EP0991011B1 (en) 1998-09-28 2007-07-25 Matsushita Electric Industrial Co., Ltd. Method and device for segmenting hand gestures
US6501515B1 (en) 1998-10-13 2002-12-31 Sony Corporation Remote control system
AU1930700A (en) 1998-12-04 2000-06-26 Interval Research Corporation Background estimation and segmentation based on range and color
US6147678A (en) 1998-12-09 2000-11-14 Lucent Technologies Inc. Video hand image-three-dimensional computer interface with multiple degrees of freedom
AU1574899A (en) 1998-12-16 2000-07-03 3Dv Systems Ltd. Self gating photosurface
US6570555B1 (en) 1998-12-30 2003-05-27 Fuji Xerox Co., Ltd. Method and apparatus for embodied conversational characters with multimodal input/output in an interface device
US6363160B1 (en) 1999-01-22 2002-03-26 Intel Corporation Interface using pattern recognition and tracking
US7003134B1 (en) 1999-03-08 2006-02-21 Vulcan Patents Llc Three dimensional object pose estimation which employs dense depth information
US6299308B1 (en) 1999-04-02 2001-10-09 Cybernet Systems Corporation Low-cost non-imaging eye tracker system for computer control
US6614422B1 (en) 1999-11-04 2003-09-02 Canesta, Inc. Method and apparatus for entering data using a virtual input device
US6503195B1 (en) 1999-05-24 2003-01-07 University Of North Carolina At Chapel Hill Methods and systems for real-time structured light depth extraction and endoscope using real-time structured light depth extraction
US6476834B1 (en) 1999-05-28 2002-11-05 International Business Machines Corporation Dynamic creation of selectable items on surfaces
US6873723B1 (en) 1999-06-30 2005-03-29 Intel Corporation Segmenting three-dimensional video images using stereo
US6738066B1 (en) 1999-07-30 2004-05-18 Electric Plant, Inc. System, method and article of manufacture for detecting collisions between video images generated by a camera and an object depicted on a display
US7113918B1 (en) 1999-08-01 2006-09-26 Electric Planet, Inc. Method for video enabled electronic commerce
US6514081B1 (en) * 1999-08-06 2003-02-04 Jeffrey L. Mengoli Method and apparatus for automating motion analysis
US7050606B2 (en) 1999-08-10 2006-05-23 Cybernet Systems Corporation Tracking and gesture recognition system particularly suited to vehicular control applications
US7224384B1 (en) 1999-09-08 2007-05-29 3Dv Systems Ltd. 3D imaging system
US6512838B1 (en) 1999-09-22 2003-01-28 Canesta, Inc. Methods for enhancing performance and data acquired from three-dimensional image systems
US7006236B2 (en) 2002-05-22 2006-02-28 Canesta, Inc. Method and apparatus for approximating depth of an object's placement onto a monitored region with applications to virtual interface devices
US20030021032A1 (en) 2001-06-22 2003-01-30 Cyrus Bamji Method and system to display a virtual input device
US7050177B2 (en) 2002-05-22 2006-05-23 Canesta, Inc. Method and apparatus for approximating depth of an object's placement onto a monitored region with applications to virtual interface devices
US6690618B2 (en) 2001-04-03 2004-02-10 Canesta, Inc. Method and apparatus for approximating a source position of a sound-causing event for determining an input used in operating an electronic device
US20030132950A1 (en) 2001-11-27 2003-07-17 Fahri Surucu Detecting, classifying, and interpreting input events based on stimuli in multiple sensory domains
DE19960180B4 (en) 1999-12-14 2006-03-09 Rheinmetall W & M Gmbh Method for producing an explosive projectile
US6674877B1 (en) 2000-02-03 2004-01-06 Microsoft Corporation System and method for visually tracking occluded objects in real time
US6663491B2 (en) 2000-02-18 2003-12-16 Namco Ltd. Game apparatus, storage medium and computer program that adjust tempo of sound
US6633294B1 (en) 2000-03-09 2003-10-14 Seth Rosenthal Method and apparatus for using captured high density motion for animation
EP1152261A1 (en) 2000-04-28 2001-11-07 CSEM Centre Suisse d'Electronique et de Microtechnique SA Device and method for spatially resolved photodetection and demodulation of modulated electromagnetic waves
US6640202B1 (en) 2000-05-25 2003-10-28 International Business Machines Corporation Elastic sensor mesh system for 3-dimensional measurement, mapping and kinematics applications
US6731799B1 (en) 2000-06-01 2004-05-04 University Of Washington Object segmentation with background extraction and moving boundary techniques
US6788809B1 (en) 2000-06-30 2004-09-07 Intel Corporation System and method for gesture recognition in three dimensions using stereo imaging and color vision
US7227526B2 (en) 2000-07-24 2007-06-05 Gesturetek, Inc. Video-based image control system
US7058204B2 (en) 2000-10-03 2006-06-06 Gesturetek, Inc. Multiple camera control system
JP3725460B2 (en) 2000-10-06 2005-12-14 株式会社ソニー・コンピュータエンタテインメント Image processing apparatus, image processing method, recording medium, computer program, semiconductor device
US7039676B1 (en) 2000-10-31 2006-05-02 International Business Machines Corporation Using video image analysis to automatically transmit gestures over a network in a chat or instant messaging session
US6539931B2 (en) 2001-04-16 2003-04-01 Koninklijke Philips Electronics N.V. Ball throwing assistant
US8035612B2 (en) 2002-05-28 2011-10-11 Intellectual Ventures Holding 67 Llc Self-contained interactive video display system
US7259747B2 (en) 2001-06-05 2007-08-21 Reactrix Systems, Inc. Interactive video display system
JP3420221B2 (en) 2001-06-29 2003-06-23 株式会社コナミコンピュータエンタテインメント東京 GAME DEVICE AND PROGRAM
WO2003015056A2 (en) 2001-08-09 2003-02-20 Visual Interaction Gmbh Automated behavioral and cognitive profiling for training and marketing segmentation
US6937742B2 (en) 2001-09-28 2005-08-30 Bellsouth Intellectual Property Corporation Gesture activated home appliance
WO2003054683A2 (en) 2001-12-07 2003-07-03 Canesta Inc. User interface for electronic devices
US7340077B2 (en) 2002-02-15 2008-03-04 Canesta, Inc. Gesture recognition system using depth perceptive sensors
US20030169906A1 (en) 2002-02-26 2003-09-11 Gokturk Salih Burak Method and apparatus for recognizing objects
US7310431B2 (en) 2002-04-10 2007-12-18 Canesta, Inc. Optical methods for remotely measuring objects
ATE321689T1 (en) 2002-04-19 2006-04-15 Iee Sarl SAFETY DEVICE FOR A VEHICLE
US7170492B2 (en) 2002-05-28 2007-01-30 Reactrix Systems, Inc. Interactive video display system
US7710391B2 (en) 2002-05-28 2010-05-04 Matthew Bell Processing an image utilizing a spatially varying pattern
US7348963B2 (en) 2002-05-28 2008-03-25 Reactrix Systems, Inc. Interactive video display system
US7489812B2 (en) 2002-06-07 2009-02-10 Dynamic Digital Depth Research Pty Ltd. Conversion and encoding techniques
US7646372B2 (en) 2003-09-15 2010-01-12 Sony Computer Entertainment Inc. Methods and systems for enabling direction detection when interfacing with a computer program
US7623115B2 (en) 2002-07-27 2009-11-24 Sony Computer Entertainment Inc. Method and apparatus for light input device
US7883415B2 (en) 2003-09-15 2011-02-08 Sony Computer Entertainment Inc. Method and apparatus for adjusting a view of a scene being displayed according to tracked head motion
US7151530B2 (en) 2002-08-20 2006-12-19 Canesta, Inc. System and method for determining an input selected by a user through a virtual interface
US7576727B2 (en) 2002-12-13 2009-08-18 Matthew Bell Interactive directed light/sound system
JP4235729B2 (en) 2003-02-03 2009-03-11 National University Corporation Shizuoka University Distance image sensor
US8745541B2 (en) * 2003-03-25 2014-06-03 Microsoft Corporation Architecture for controlling a computer using hand gestures
DE602004006190T8 (en) 2003-03-31 2008-04-10 Honda Motor Co., Ltd. Device, method and program for gesture recognition
WO2004107266A1 (en) 2003-05-29 2004-12-09 Honda Motor Co., Ltd. Visual tracking using depth data
US8072470B2 (en) 2003-05-29 2011-12-06 Sony Computer Entertainment Inc. System and method for providing a real-time three-dimensional interactive environment
JP4546956B2 (en) 2003-06-12 2010-09-22 Honda Motor Co., Ltd. Target orientation estimation using depth detection
US7874917B2 (en) 2003-09-15 2011-01-25 Sony Computer Entertainment Inc. Methods and systems for enabling depth and direction detection when interfacing with a computer program
US7536032B2 (en) 2003-10-24 2009-05-19 Reactrix Systems, Inc. Method and system for processing captured image information in an interactive video display system
JP3847753B2 (en) 2004-01-30 2006-11-22 Sony Computer Entertainment Inc. Image processing apparatus, image processing method, recording medium, computer program, semiconductor device
US20050215319A1 (en) 2004-03-23 2005-09-29 Harmonix Music Systems, Inc. Method and apparatus for controlling a three-dimensional character in a three-dimensional gaming environment
CN100573548C (en) 2004-04-15 2009-12-23 Gesturetek, Inc. Method and apparatus for tracking bimanual movements
US7308112B2 (en) 2004-05-14 2007-12-11 Honda Motor Co., Ltd. Sign based human-machine interaction
US7704135B2 (en) 2004-08-23 2010-04-27 Harrison Jr Shelton E Integrated game system, method, and device
US7991220B2 (en) 2004-09-01 2011-08-02 Sony Computer Entertainment Inc. Augmented reality game system using identification information to display a virtual object in association with a position of a real object
EP1645944B1 (en) 2004-10-05 2012-08-15 Sony France S.A. A content-management interface
JP4449723B2 (en) 2004-12-08 2010-04-14 Sony Corporation Image processing apparatus, image processing method, and program
KR20060070280A (en) 2004-12-20 2006-06-23 Electronics and Telecommunications Research Institute Apparatus and method for a user interface using hand gesture recognition
HUE049974T2 (en) 2005-01-07 2020-11-30 Qualcomm Inc Detecting and tracking objects in images
CN101137996A (en) 2005-01-07 2008-03-05 Gesturetek, Inc. Optical flow based tilt sensor
WO2006074310A2 (en) 2005-01-07 2006-07-13 Gesturetek, Inc. Creating 3d images of objects by illuminating with infrared patterns
EP1851750A4 (en) 2005-02-08 2010-08-25 Oblong Industries, Inc. System and method for gesture based control system
US8009871B2 (en) 2005-02-08 2011-08-30 Microsoft Corporation Method and system to segment depth images and to detect shapes in three-dimensionally acquired data
KR100688743B1 (en) 2005-03-11 2007-03-02 Samsung Electro-Mechanics Co., Ltd. Manufacturing method of PCB having multilayer embedded passive-chips
JP4686595B2 (en) 2005-03-17 2011-05-25 Honda Motor Co., Ltd. Pose estimation based on critical point analysis
US8147248B2 (en) 2005-03-21 2012-04-03 Microsoft Corporation Gesture training
EP1886509B1 (en) 2005-05-17 2017-01-18 Qualcomm Incorporated Orientation-sensitive signal output
EP1752748B1 (en) 2005-08-12 2008-10-29 MESA Imaging AG Highly sensitive, fast pixel for use in an image sensor
US20080026838A1 (en) 2005-08-22 2008-01-31 Dunstan James E Multi-player non-role-playing virtual world games: method for two-way interaction between participants and multi-player virtual world games
US7450736B2 (en) 2005-10-28 2008-11-11 Honda Motor Co., Ltd. Monocular tracking of 3D human motion with a coordinated mixture of factor analyzers
GB2431717A (en) 2005-10-31 2007-05-02 Sony UK Ltd Scene analysis
JP4917615B2 (en) 2006-02-27 2012-04-18 Prime Sense Ltd. Range mapping using uncorrelated speckle
US8766983B2 (en) 2006-05-07 2014-07-01 Sony Computer Entertainment Inc. Methods and systems for processing an interchange of real time effects during video communication
US7721207B2 (en) 2006-05-31 2010-05-18 Sony Ericsson Mobile Communications Ab Camera based control
US7701439B2 (en) 2006-07-13 2010-04-20 Northrop Grumman Corporation Gesture recognition simulation system and method
US7725547B2 (en) * 2006-09-06 2010-05-25 International Business Machines Corporation Informing a user of gestures made by others out of the user's line of sight
US8395658B2 (en) 2006-09-07 2013-03-12 Sony Computer Entertainment Inc. Touch screen-like user interface that does not require actual touching
JP5395323B2 (en) 2006-09-29 2014-01-22 Brain Vision Co., Ltd. Solid-state image sensor
US20080124690A1 (en) 2006-11-28 2008-05-29 Attune Interactive, Inc. Training system using an interactive prompt character
US20080134102A1 (en) 2006-12-05 2008-06-05 Sony Ericsson Mobile Communications Ab Method and system for detecting movement of an object
US8351646B2 (en) 2006-12-21 2013-01-08 Honda Motor Co., Ltd. Human pose estimation and tracking using label assignment
US7412077B2 (en) 2006-12-29 2008-08-12 Motorola, Inc. Apparatus and methods for head pose estimation and head gesture detection
US9311528B2 (en) * 2007-01-03 2016-04-12 Apple Inc. Gesture learning
US7840031B2 (en) * 2007-01-12 2010-11-23 International Business Machines Corporation Tracking a range of body movement based on 3D captured image streams of a user
US7770136B2 (en) 2007-01-24 2010-08-03 Microsoft Corporation Gesture recognition interactive feedback
US8144148B2 (en) * 2007-02-08 2012-03-27 Edge 3 Technologies Llc Method and system for vision-based interaction in a virtual environment
GB0703974D0 (en) 2007-03-01 2007-04-11 Sony Comp Entertainment Europe Entertainment device
US7729530B2 (en) 2007-03-03 2010-06-01 Sergey Antonov Method and apparatus for 3-D data input to a personal computer with a multimedia oriented operating system
US20080234023A1 (en) * 2007-03-23 2008-09-25 Ajmal Mullahkhel Light game
US8475172B2 (en) * 2007-07-19 2013-07-02 Massachusetts Institute Of Technology Motor learning and rehabilitation using tactile feedback
US7852262B2 (en) 2007-08-16 2010-12-14 Cybernet Systems Corporation Wireless mobile indoor/outdoor tracking system
EP2180926B1 (en) * 2007-08-22 2011-04-13 Koninklijke Philips Electronics N.V. System and method for displaying selected information to a person undertaking exercises
US7970176B2 (en) 2007-10-02 2011-06-28 Omek Interactive, Inc. Method and system for gesture classification
US9292092B2 (en) 2007-10-30 2016-03-22 Hewlett-Packard Development Company, L.P. Interactive display system with collaborative gesture detection
US20090221368A1 (en) 2007-11-28 2009-09-03 Ailive Inc. Method and system for creating a shared game space for a networked game
GB2455316B (en) 2007-12-04 2012-08-15 Sony Corp Image processing apparatus and method
US8149210B2 (en) 2007-12-31 2012-04-03 Microsoft International Holdings B.V. Pointing device and method
CN201254344Y (en) 2008-08-20 2009-06-10 Grassland Research Institute, Chinese Academy of Agricultural Sciences Plant specimens and seed storage
US9399167B2 (en) 2008-10-14 2016-07-26 Microsoft Technology Licensing, Llc Virtual space mapping of a variable activity region
US9377857B2 (en) * 2009-05-01 2016-06-28 Microsoft Technology Licensing, Llc Show body position
US9417700B2 (en) * 2009-05-21 2016-08-16 Edge3 Technologies Gesture recognition systems and related methods

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070066393A1 (en) * 1998-08-10 2007-03-22 Cybernet Systems Corporation Real-time head tracking system for computer games and other applications
CN1764931A (en) * 2003-02-11 2006-04-26 Sony Computer Entertainment Inc. Method and apparatus for real time motion capture
CN1731316A (en) * 2005-08-19 2006-02-08 Beihang University Human-computer interaction method for a virtual ape game
CN101202994A (en) * 2006-12-14 2008-06-18 Beijing Samsung Telecommunication Technology Research Co., Ltd. Method and device for assisting a user in physical exercise

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108293171A (en) * 2015-12-01 2018-07-17 Sony Corporation Information processing apparatus, information processing method, and program
CN108293171B (en) * 2015-12-01 2020-12-04 Sony Corporation Information processing apparatus, information processing method, and storage medium
CN106446569A (en) * 2016-09-29 2017-02-22 Yulong Computer Telecommunication Scientific (Shenzhen) Co., Ltd. Movement guidance method and terminal
CN109583363A (en) * 2018-11-27 2019-04-05 Hunan Vision Weiye Intelligent Technology Co., Ltd. Method and system for improving a speaker's posture and body movements based on human body key point detection
CN111063167A (en) * 2019-12-25 2020-04-24 Goertek Inc. Fatigue driving recognition and prompting method, device, and related components
CN110992775A (en) * 2019-12-31 2020-04-10 Pingdingshan University Percussion music rhythm training device
US20220256347A1 (en) * 2021-02-09 2022-08-11 Qualcomm Incorporated Context Dependent V2X Misbehavior Detection

Also Published As

Publication number Publication date
WO2010138470A2 (en) 2010-12-02
CN102448561B (en) 2013-07-10
US20100306712A1 (en) 2010-12-02
EP2435147A2 (en) 2012-04-04
BRPI1011193A2 (en) 2016-03-15
WO2010138470A3 (en) 2011-03-10
EP2435147A4 (en) 2016-12-07
BRPI1011193B1 (en) 2019-12-03
US8418085B2 (en) 2013-04-09

Similar Documents

Publication Publication Date Title
CN102448561B (en) Gesture coach
CN102413886B (en) Show body position
US9824480B2 (en) Chaining animations
CN102301315B (en) Gesture recognizer system architecture
CN102449576B (en) Gesture shortcuts
CN102301311B (en) Standard gestures
US8451278B2 (en) Determine intended motions
CN102356373B (en) Virtual object manipulation
EP2585896B1 (en) User tracking feedback
CN102947774B (en) Natural user input for driving interactive stories
CN102207771A (en) Deducing the intent of users participating in a motion capture system
US20100306716A1 (en) Extending standard gestures
CN102448566A (en) Gestures beyond skeletal
CN102473320A (en) Bringing a visual representation to life via learned input from the user
CN102129293A (en) Tracking groups of users in a motion capture system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: MICROSOFT TECHNOLOGY LICENSING LLC

Free format text: FORMER OWNER: MICROSOFT CORP.

Effective date: 20150505

C41 Transfer of patent application or patent right or utility model
TR01 Transfer of patent right

Effective date of registration: 20150505

Address after: Washington State

Patentee after: Microsoft Technology Licensing, LLC

Address before: Washington State

Patentee before: Microsoft Corp.