CN102184009A - Hand position post processing refinement in tracking system - Google Patents
- Publication number
- CN102184009A (application CN201110112867A)
- Authority
- CN
- China
- Prior art keywords
- estimation
- hand
- point
- difference
- threshold value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
- Image Analysis (AREA)
Abstract
The invention provides hand position post-processing refinement in a tracking system. A user's body is tracked in a physical space by a tracking system having a depth camera, to obtain a model of the body, and an initial estimate of the position of the user's hand is also obtained. When frame-to-frame movement of the initial estimate is less than a threshold level, temporal smoothing is performed; when the movement exceeds the threshold, little or no smoothing is performed. The smoothed estimate is used to define a local volume of the body which is searched for a hand extremity, to define a new hand position. Stabilized upper-body points which serve as reliable reference positions are generated using detection techniques including occlusion detection, and arm vectors are defined from the upper-body points to the previously estimated hand position. A search is performed along a vector to detect the hand extremity and define the new hand position.
Description
Technical field
The present application relates to motion capture systems and methods, and in particular to hand position post-processing refinement in a tracking system.
Background technology
A motion capture system obtains data regarding the position and movement of a person or other subject in a physical space, and can use the data as an input to an application in a computing system. Many applications are possible, such as for military, entertainment, sports and medical purposes. For instance, the motion of a person can be mapped to a three-dimensional (3-D) human skeletal model and used to create an animated character or avatar. Motion capture systems can include optical systems, including those using visible and invisible (e.g., infrared) light, which use cameras to detect the presence of a person in a field of view. However, further refinements are needed which allow a person to be tracked with higher fidelity. In particular, it is desirable to track a person's hand with a high degree of fidelity.
Summary of the invention
A processor-implemented method, motion capture system and tangible computer-readable storage are provided for tracking a user's hand with improved fidelity in a motion capture system. For example, the user can make hand gestures to navigate a menu, interact in a browsing or shopping experience, choose a game to play, or access communication features such as sending a message to a friend. The user can use the hand to control a cursor to select an item from an on-screen menu, or to control the movement of an avatar in a 3-D virtual world. Generally, the hand position can be tracked and used as a control input to an application in the motion capture system.
To enhance the ability of the motion capture system to accurately identify the hand position, a number of different techniques are provided. The techniques generally begin with an initial estimate of the hand position and refine that estimate. Problems such as jitter, limited camera resolution, camera noise and occluded body parts are addressed.
In one embodiment, a processor-implemented method for tracking user movement in a motion capture system is provided. The method includes tracking a user's hand over time in a field of view of the motion capture system, including obtaining 3-D depth images of the hand at different points in time. The 3-D depth images may be used, e.g., to provide a skeletal model of the user's body. The method further includes obtaining an initial estimate of the position of the hand in the field of view based on the tracking. The initial estimate can be provided by any type of motion tracking system. The initial estimate of the position may be somewhat inaccurate due to errors introduced by the motion tracking system, including noise, jitter and the tracking algorithm used. The method further includes determining a difference of the initial estimate relative to a corresponding estimate of a previous point in time, and determining whether the difference is less than a threshold. The threshold can define a 2-D area or a 3-D volume which is centered on the estimate of the previous point in time. If the difference is less than the threshold, a smoothing operation is applied to the initial estimate, to provide a current estimate of the position by changing the initial estimate by an amount which is less than the difference.
On the other hand, if the difference is relatively large, such that it is not less than the threshold, the current estimate of the position can be provided substantially from the initial estimate. In this case, no smoothing effect is applied. This technique minimizes latency for large frame-to-frame movements of the hand, while smoothing smaller movements. Based on the current estimate, a search volume, such as a rectangular prism (including a cube) or a sphere, is defined in the field of view. The 3-D depth image is searched within this volume to determine a new estimate of the position of the hand in the field of view. The search can include identifying locations of the hand within the volume and determining an average of the locations. The method further includes providing a control input to an application which represents the hand in the field of view, based at least in part on the new estimate of the position, or on a value derived from the new estimate. The control input can be used for navigating a menu, controlling the movement of an avatar, and so forth.
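The threshold-based smoothing and the search-volume averaging described above can be sketched roughly as follows. This is only an illustration under stated assumptions, not the patented implementation: the function names, the interpolation factor `alpha`, the spherical threshold region and the cuboid search volume are choices made here for concreteness.

```python
import math

def smooth_estimate(initial, previous, threshold, alpha=0.5):
    """Smooth small frame-to-frame movements of the hand estimate.

    If the difference from the previous estimate is below the threshold
    (here, a sphere centered on the previous estimate), move only part
    of the way toward the initial estimate, i.e., change it by an amount
    less than the difference. Otherwise, pass the initial estimate
    through substantially as-is, minimizing latency for large movements.
    Positions are (x, y, z) tuples.
    """
    diff = math.dist(initial, previous)
    if diff < threshold:
        return tuple(p + alpha * (i - p) for i, p in zip(initial, previous))
    return initial

def search_volume_average(depth_points, center, half_extent):
    """Search a cuboid volume of the 3-D depth image, centered on the
    current estimate, and average the locations found inside it to form
    the new estimate of the hand position.
    """
    inside = [p for p in depth_points
              if all(abs(c - x) <= half_extent for c, x in zip(center, p))]
    if not inside:
        return center  # nothing found; keep the current estimate
    n = len(inside)
    return tuple(sum(axis) / n for axis in zip(*inside))
```

A usage pattern consistent with the description would be to call `smooth_estimate` each frame and then pass its result as `center` to `search_volume_average`.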
This summary is provided to introduce, in simplified form, a selection of concepts that are further described below. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Description of drawings
In the drawings, like-numbered elements correspond to one another.
Fig. 1 depicts an example embodiment of a motion capture system.
Fig. 2 depicts an example block diagram of the motion capture system of Fig. 1.
Fig. 3 depicts an example block diagram of a computing environment that may be used in the motion capture system of Fig. 1.
Fig. 4 depicts another example block diagram of a computing environment that may be used in the motion capture system of Fig. 1.
Fig. 5 depicts a method for tracking a user's hand with improved fidelity in a motion capture system.
Fig. 6 depicts an example method for tracking movement of a person as set forth in step 500 of Fig. 5.
Fig. 7A depicts an example method for updating a hand position as set forth in step 504 of Fig. 5.
Fig. 7B depicts an example method for performing smoothing as set forth in step 700 of Fig. 7A.
Fig. 7C depicts another example method for performing smoothing as set forth in step 700 of Fig. 7A.
Fig. 7D depicts an example method for updating a hand position as set forth in step 504 of Fig. 5.
Fig. 7E depicts an example method for stabilizing reference points of a model as set forth in step 732 of Fig. 7D.
Fig. 7F depicts an example method for updating a hand position as set forth in step 504 of Fig. 5.
Fig. 8 depicts an example model of a user as set forth in step 608 of Fig. 6.
Fig. 9A depicts an example technique for performing smoothing as set forth in step 700 of Fig. 7A, when a difference between an initial estimate and a previous estimate is less than a threshold.
Fig. 9B depicts an example technique for performing smoothing as set forth in step 700 of Fig. 7A, when the difference between the initial estimate and the previous estimate is greater than or equal to the threshold.
Fig. 10 depicts an example technique for providing a new estimate of a hand position as set forth in steps 704 and 706 of Fig. 7A.
Fig. 11A depicts an example of defining at least one vector as set forth in step 734 of Fig. 7D.
Fig. 11B depicts an example of searching for an arm extremity as set forth in step 736 of Fig. 7D.
Fig. 11C depicts an example of scoring candidate locations as set forth in step 736 of Fig. 7D.
Fig. 12A depicts an example front view of a model of a user in which a reference point of the body is occluded, as set forth in step 750 of Fig. 7E.
Fig. 12B depicts a side view of the model of Fig. 12A.
Fig. 12C depicts a projected camera image view of the model of Fig. 12A.
Fig. 12D depicts a top view of the 3-D model of Fig. 12A.
Embodiment
Techniques are provided for more accurately identifying the position of a hand in a motion tracking system. The techniques can be extended to track other body parts, such as a foot or head, or non-body-part objects. Generally, a depth camera system can track the movement of a user's body in a physical space and derive a model of the body which is updated for each camera frame, several times per second. However, it is often necessary to identify the user's hand with a high degree of fidelity. A tracking system which is optimized for whole-body tracking may lack the ability to track the hand with sufficiently high accuracy; such a system can provide a rough and potentially unstable guess of the hand position. The techniques provided herein refine an initial estimate of the hand position which may be generated by an external human tracking system. The techniques include post-processing steps which analyze a local area of the depth image, generate stabilized upper-body points that can serve as reliable reference positions, search for a hand extremity in the depth image, and perform temporal smoothing in a manner that minimizes perceptible latency.
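The extremity search mentioned above — walking outward from a stable upper-body point along an arm vector through the estimated hand position — might be sketched as below. This is a simplified illustration under assumptions made here (a plain list of 3-D points standing in for the local depth image, a cylindrical acceptance region around the vector, and an averaging band at the tip); the actual scoring of candidate locations is detailed later with respect to Figs. 11A-11C.

```python
def refine_along_arm(points, shoulder, hand_estimate,
                     radius=0.15, tip_band=0.05):
    """Search along the shoulder-to-hand ("arm") vector for the arm's
    extremity among 3-D depth points, and return its position.

    points: iterable of (x, y, z) tuples near the user.
    shoulder: stabilized upper-body reference point.
    hand_estimate: smoothed estimate of the hand position.
    radius: how far from the arm vector a point may lie and still count.
    tip_band: points projecting within this distance of the farthest
        projection are averaged to locate the extremity.
    """
    arm = tuple(h - s for h, s in zip(hand_estimate, shoulder))
    norm = sum(a * a for a in arm) ** 0.5
    direction = tuple(a / norm for a in arm)

    def along(p):   # signed distance of p along the arm vector
        return sum((x - s) * d for x, s, d in zip(p, shoulder, direction))

    def lateral(p):  # distance of p from the arm vector
        t = along(p)
        closest = tuple(s + t * d for s, d in zip(shoulder, direction))
        return sum((x - c) ** 2 for x, c in zip(p, closest)) ** 0.5

    candidates = [p for p in points if along(p) > 0 and lateral(p) < radius]
    if not candidates:
        return hand_estimate  # nothing found; fall back to the estimate
    tip = max(along(p) for p in candidates)
    band = [p for p in candidates if along(p) >= tip - tip_band]
    return tuple(sum(axis) / len(band) for axis in zip(*band))
```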
Fig. 1 depicts an example embodiment of a motion capture system 10 in which a person 8 interacts with an application. This illustrates a real-world deployment of a motion capture system, such as in the home of a user. The motion capture system 10 includes a display 196, a depth camera system 20, and a computing environment or apparatus 12. The depth camera system 20 may include an image camera component 22 having an infrared (IR) light emitter 24, an infrared camera 26 and a red-green-blue (RGB) camera 28. The user 8, also referred to as a person or player, stands in a field of view 6 of the depth camera. Lines 2 and 4 denote a boundary of the field of view 6. In this example, the depth camera system 20 and computing environment 12 provide an application in which an avatar 197 on the display 196 tracks the movements of the user 8. For example, the avatar may raise an arm when the user raises an arm. The avatar 197 stands on a road 198 in a 3-D virtual world. A Cartesian world coordinate system may be defined, including a z-axis which extends along the focal length of the depth camera system 20, e.g., horizontally, a y-axis which extends vertically, and an x-axis which extends laterally and horizontally. Note that the perspective of the drawing is modified as a simplification, as the display 196 extends vertically in the y-axis direction and the z-axis extends out from the depth camera system, perpendicular to the y-axis and the x-axis, and parallel to the ground surface on which the user 8 stands.
Generally, the motion capture system 10 is used to recognize, analyze and/or track a human target. The computing environment 12 can include a computer, a gaming system or console, or the like, as well as hardware components and/or software components to execute applications.
The depth camera system 20 may include a camera which is used to visually monitor one or more people, such as the user 8, such that gestures and/or movements performed by the user may be captured, analyzed and tracked to perform one or more controls or actions within an application, such as animating an avatar or on-screen character or selecting a menu item in a user interface (UI).
The user 8 may be tracked using the depth camera system 20 such that the gestures and/or movements of the user are captured and used to animate an avatar or on-screen character, and/or interpreted as input controls to the application being executed by the computing environment 12.
Some movements of the user 8 may be interpreted as controls that correspond to actions other than controlling an avatar. For example, in one embodiment, the player may use movements to end, pause or save a game, select a level, view high scores, communicate with a friend, and so forth. The player may use movements to select a game or other application from a main user interface, or to otherwise navigate a menu of options. Thus, a full range of motion of the user 8 may be available, used and analyzed in any suitable manner to interact with an application.
The person can hold an object such as a prop when interacting with an application. In such embodiments, the movement of the person and the object may be used to control the application. For example, the motion of a player holding a racket may be tracked and used for controlling an on-screen racket in an application which simulates a tennis game. In another example embodiment, the motion of a player holding a toy weapon such as a plastic sword may be tracked and used for controlling a corresponding weapon in the virtual world of an application which provides a pirate adventure.
Fig. 2 depicts an example block diagram of the motion capture system 10 of Fig. 1. The depth camera system 20 may be configured to capture video with depth information, including a depth image that may include depth values, via any suitable technique including, for example, time-of-flight, structured light, stereo imaging, or the like. The depth camera system 20 may organize the depth information into "Z layers," or layers that are perpendicular to a Z-axis extending from the depth camera along its line of sight.
The depth camera system 20 may include an image camera component 22, such as a depth camera that captures the depth image of a scene in a physical space. The depth image may include a two-dimensional (2-D) pixel area of the captured scene, where each pixel in the 2-D pixel area has an associated depth value representing its linear distance from the image camera component 22.
Time-of-flight analysis may also be used to indirectly determine a physical distance from the depth camera system 20 to a particular location on a target or object, by analyzing the intensity of the reflected beam of light over time via various techniques including, for example, shuttered light pulse imaging.
In another example embodiment, the depth camera system 20 may use structured light to capture depth information. In such an analysis, patterned light (i.e., light displayed as a known pattern such as a grid pattern or a stripe pattern) may be projected onto the scene via, for example, the IR light emitter 24. Upon striking the surface of one or more targets or objects in the scene, the pattern may become deformed in response. Such a deformation of the pattern may be captured by, for example, the infrared camera 26 and/or the RGB camera 28, and then analyzed to determine a physical distance from the depth camera system to a particular location on the target or object.
The depth camera system 20 may include two or more physically separated cameras that may view a scene from different angles to obtain visual stereo data that may be resolved to generate depth information.
The depth camera system 20 may further include a microphone 30 which includes, e.g., a transducer or sensor that receives sound waves and converts them into an electrical signal. Additionally, the microphone 30 may be used to receive audio signals such as sounds that are provided by a person to control an application that is run by the computing environment 12. The audio signals can include vocal sounds of the person such as spoken words, whistles, shouts and other utterances, as well as non-vocal sounds such as clapping hands or stomping feet.
The depth camera system 20 may include a processor 32 that is in communication with the image camera component 22. The processor 32 may include a standardized processor, a specialized processor, a microprocessor, or the like that executes instructions, including, for example, instructions for receiving a depth image; instructions for generating a grid of voxels based on the depth image; instructions for removing a background included in the grid of voxels to isolate one or more voxels associated with a human target; instructions for determining a location or position of one or more extremities of the isolated human target; instructions for adjusting a model based on the location or position of the one or more extremities; or any other suitable instruction, which will be described in more detail below.
The depth camera system 20 may further include a memory component 34 that may store instructions that are executed by the processor 32, as well as store images or frames of images captured by the 3-D camera or RGB camera, or any other suitable information, images, or the like. According to an example embodiment, the memory component 34 may include random access memory (RAM), read-only memory (ROM), cache, flash memory, a hard disk, or any other suitable tangible computer-readable storage component. The memory component 34 may be a separate component in communication with the image capture component 22 and the processor 32 via a bus 21. According to another embodiment, the memory component 34 may be integrated into the processor 32 and/or the image capture component 22.
The depth camera system 20 may be in communication with the computing environment 12 via a communication link 36. The communication link 36 may be a wired and/or wireless connection. According to one embodiment, the computing environment 12 may provide a clock signal to the depth camera system 20 via the communication link 36, where this signal indicates when to capture image data from the physical space which is in the field of view of the depth camera system 20.
Additionally, the depth camera system 20 may provide the depth information and images captured by, for example, the 3-D camera 26 and/or the RGB camera 28, and/or a skeletal model that may be generated by the depth camera system 20, to the computing environment 12 via the communication link 36. The computing environment 12 may then use the model, depth information and captured images to control an application. For example, as shown in Fig. 2, the computing environment 12 may include a gestures library 190, such as a collection of gesture filters, each having information concerning a gesture that may be performed by the skeletal model (as the user moves). For example, a gesture filter can be provided for various hand gestures, such as swiping or flinging of the hands. By comparing a detected motion to each filter, a specified gesture or movement which is performed by a person can be identified. An extent to which the movement is performed can also be determined.
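The filter-comparison step can be pictured with a toy sketch such as the following. The similarity measure, the score threshold and the representation of a motion as a sequence of normalized displacement values are all assumptions made here for illustration; the actual filters in the gestures library 190 are not specified at this level of detail.

```python
def match_gesture(motion, filters, min_score=0.8):
    """Compare a tracked motion against a library of gesture filters and
    return the name of the best-matching gesture, or None if no filter
    scores above min_score.

    motion: sequence of normalized displacement values for the hand.
    filters: dict mapping gesture name -> template sequence.
    """
    best_name, best_score = None, min_score
    for name, template in filters.items():
        n = min(len(motion), len(template))
        if n == 0:
            continue
        # Simple per-step agreement score between the motion and template.
        score = sum(1.0 - abs(a - b) for a, b in zip(motion, template)) / n
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```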
The data captured by the depth camera system 20 in the form of the skeletal model, and the movements associated with it, may be compared to the gesture filters in the gestures library 190 to identify when a user (as represented by the skeletal model) has performed one or more specific movements. Those movements may be associated with various controls of an application.
The computing environment may also include a processor 192 for executing instructions which are stored in a memory 194 to provide audio-video output signals to the display device 196 and to achieve other functionality as described herein.
Fig. 3 depicts an example block diagram of a computing environment that may be used in the motion capture system of Fig. 1. The computing environment can be used to interpret one or more gestures or other movements and, in response, update a visual space on a display. The computing environment, such as the computing environment 12 described above, may include a multimedia console 100, such as a gaming console. The multimedia console 100 has a central processing unit (CPU) 101 having a level 1 cache 102, a level 2 cache 104, and a flash ROM (read-only memory) 106. The level 1 cache 102 and the level 2 cache 104 temporarily store data and hence reduce the number of memory access cycles, thereby improving processing speed and throughput. The CPU 101 may be provided with more than one core, and thus with additional level 1 and level 2 caches 102 and 104. Memory such as the flash ROM 106 may store executable code that is loaded during an initial phase of a boot process when the multimedia console 100 is powered on.
A graphics processing unit (GPU) 108 and a video encoder/video codec (coder/decoder) 114 form a video processing pipeline for high-speed, high-resolution graphics processing. Data is carried from the graphics processing unit 108 to the video encoder/video codec 114 via a bus. The video processing pipeline outputs data to an A/V (audio/video) port 140 for transmission to a television or other display. A memory controller 110 is connected to the GPU 108 to facilitate processor access to various types of memory 112, such as RAM (random access memory).
System memory 143 is provided to store application data that is loaded during the boot process. A media drive 144 is provided and may comprise a DVD/CD drive, hard drive, or other removable media drive. The media drive 144 may be internal or external to the multimedia console 100. Application data may be accessed via the media drive 144 for execution, playback, etc. by the multimedia console 100. The media drive 144 is connected to an I/O controller 120 via a bus, such as a serial ATA bus or other high-speed connection.
A system management controller 122 provides a variety of service functions related to assuring availability of the multimedia console 100. An audio processing unit 123 and an audio codec 132 form a corresponding audio processing pipeline with high-fidelity and stereo processing. Audio data is carried between the audio processing unit 123 and the audio codec 132 via a communication link. The audio processing pipeline outputs data to the A/V port 140 for reproduction by an external audio player or device having audio capabilities.
A front panel I/O subassembly 130 supports the functionality of a power button 150 and an eject button 152, as well as any LEDs (light-emitting diodes) or other indicators exposed on the outer surface of the multimedia console 100. A system power supply module 136 provides power to the components of the multimedia console 100. A fan 138 cools the circuitry within the multimedia console 100.
The CPU 101, GPU 108, memory controller 110, and various other components within the multimedia console 100 are interconnected via one or more buses, including serial and parallel buses, a memory bus, a peripheral bus, and a processor or local bus using any of a variety of bus architectures.
When the multimedia console 100 is powered on, application data may be loaded from the system memory 143 into memory 112 and/or the caches 102, 104, and executed on the CPU 101. The application may present a graphical user interface that provides a consistent user experience when navigating to the different media types available on the multimedia console 100. In operation, applications and/or other media contained within the media drive 144 may be launched or played from the media drive 144 to provide additional functionalities to the multimedia console 100.
When the multimedia console 100 is powered on, a specified amount of hardware resources is reserved for system use by the multimedia console operating system. These resources may include a reservation of memory (e.g., 16 MB), CPU and GPU cycles (e.g., 5%), networking bandwidth (e.g., 8 kbps), etc. Because these resources are reserved at system boot time, the reserved resources do not exist from the application's point of view.
In particular, the memory reservation is preferably large enough to contain the launch kernel, concurrent system applications and drivers. The CPU reservation is preferably constant, such that if the reserved CPU usage is not consumed by the system applications, an idle thread will consume any unused cycles.
With regard to the GPU reservation, lightweight messages generated by the system applications (e.g., pop-ups) are displayed by using a GPU interrupt to schedule code to render a pop-up into an overlay. The amount of memory required for an overlay depends on the overlay area size, and the overlay preferably scales with screen resolution. Where a full user interface is used by a concurrent system application, it is preferable to use a resolution which is independent of the application resolution. A scaler may be used to set this resolution, such that the need to change frequency and cause a TV re-sync is eliminated.
After the multimedia console 100 boots and system resources are reserved, concurrent system applications execute to provide system functionalities. The system functionalities are encapsulated in a set of system applications that execute within the reserved system resources described above. The operating system kernel identifies which threads are system application threads versus gaming application threads. The system applications are preferably scheduled to run on the CPU 101 at predetermined times and intervals, in order to provide a consistent system resource view to the application. The scheduling is intended to minimize cache disruption for the gaming application running on the console.
When a concurrent system application requires audio, audio processing is scheduled asynchronously to the gaming application due to its time sensitivity. A multimedia console application manager (described below) controls the gaming application audio level (e.g., mute, attenuate) when system applications are active.
Input devices (e.g., controllers 142(1) and 142(2)) are shared by gaming applications and system applications. The input devices are not reserved resources, but are switched between the system applications and the gaming application such that each has a focus of the device. The application manager preferably controls the switching of the input stream, without requiring knowledge of the gaming application, and a driver maintains state information regarding focus switches. The console 100 may receive additional inputs from the depth camera system 20 of Fig. 2, including the cameras 26 and 28.
Fig. 4 depicts another example block diagram of a computing environment that may be used in the motion capture system of Fig. 1.
In a motion capture system, this computing environment can be used to interpret one or more gestures or other movements and, in response, update a visual space on a display. The computing environment 220 comprises a computer 241, which typically includes a variety of tangible computer-readable storage media. This can be any available media that can be accessed by the computer 241, and includes both volatile and nonvolatile media, and removable and non-removable media. The system memory 222 includes computer storage media in the form of volatile and/or nonvolatile memory, such as read-only memory (ROM) 223 and random access memory (RAM) 260. A basic input/output system 224 (BIOS), containing the basic routines that help transfer information between elements within the computer 241, such as during start-up, is typically stored in the ROM 223. The RAM 260 typically contains data and/or program modules that are immediately accessible to, and/or presently being operated on by, the processing unit 259. A graphics interface 231 communicates with a GPU 229. By way of example, and not limitation, Fig. 4 depicts an operating system 225, application programs 226, other program modules 227 and program data 228.
Driver of more than discussing and describing in Fig. 4 and the computer-readable storage medium that is associated thereof provide storage to computer-readable instruction, data structure, program module and other data for computing machine 241.For example, hard disk drive 238 is depicted as storage operating system 258, application program 257, other program module 256 and routine data 255.Notice that these assemblies can be identical with routine data 228 with operating system 225, application program 226, other program modules 227, also can be different with them.It is in order to illustrate that they are different copies at least that operating system 258, application program 257, other program modules 256 and routine data 255 have been marked different labels here.The user can pass through input equipment, such as keyboard 251 and pointing device 252 (being commonly called mouse, tracking ball or touch pads), to computing machine 241 input commands and information.Other input equipment (not shown) can comprise microphone, operating rod, game paddle, satellite dish, scanner or the like.These and other input equipments are connected to processing unit 259 by the user's input interface 236 that is coupled to system bus usually, but also can such as parallel port, game port or USB (universal serial bus) (USB), be connected by other interfaces and bus structure.The degree of depth camera system 20 that comprises Fig. 2 of camera 26 and 28 can be control desk 100 definition additional input equipment.The display of monitor 242 or other types is connected to system bus 221 also via interface such as video interface 232.Except that monitor, computing machine also can comprise other peripheral output device, and such as loudspeaker 244 and printer 243, they can connect by output peripheral interface 233.
When used in a LAN networking environment, the computer 241 is connected to the LAN 245 through a network interface or adapter 237. When used in a WAN networking environment, the computer 241 typically includes a modem 250 or other means for establishing communications over a WAN, such as the Internet 249. The modem 250, which may be internal or external, may be connected to the system bus 221 via the user input interface 236, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 241, or portions thereof, may be stored in a remote memory storage device. By way of example, and not limitation, Fig. 4 depicts remote application programs 248 as residing on a memory device 247. It will be appreciated that the network connections shown are exemplary, and other means of establishing a communications link between the computers may be used.
The computing environment can include tangible computer-readable storage having computer-readable software embodied thereon for programming at least one processor to perform the methods described herein for generating representative training data for human body tracking. The tangible computer-readable storage can include, e.g., one or more of the components 222, 234, 235, 230, 253 and 254. Further, one or more processors of the computing environment can provide a processor-implemented method for generating representative training data for human body tracking, comprising processor-implemented steps as described herein. A processor can include, e.g., one or more of the components 229 and 259.
Fig. 5 depicts a method for tracking a user's hand with improved fidelity in a motion capture system. Step 500 includes tracking a user in the field of view of the depth camera system. For further details, see, e.g., Fig. 6. Step 502 includes obtaining an initial estimate of the hand position based on a tracking algorithm. Note that the process is described with respect to a single hand, but the process can also be applied to determine the position of a second hand of a given user, or generally one or more hands of one or more users in the field of view. The initial estimate can be obtained from the user tracking described in connection with Fig. 6. Step 504 includes updating the hand position. For further details, see, e.g., Figs. 7A and 7D. Step 506 includes providing a control input to an application based on the hand position.
The input can represent a position of the user's hand in the field of view, e.g., in terms of a point location expressed by coordinates (x, y, z) in a Cartesian coordinate system. Fig. 1 provides an example of a Cartesian coordinate system. Step 510 includes processing the control input at the application. This can involve, e.g., updating a display screen based on the movement of the user's hand, starting a game application, or performing any number of other actions.
Fig. 6 depicts an example method for tracking movement of a person as set forth in step 500 of Fig. 5. The example method may be implemented using, e.g., the depth camera system 20 and/or the computing environment 12, 100 or 220, as discussed in connection with Figs. 2-4. One or more people can be scanned to generate a model, such as a skeletal model, a mesh human model, or any other suitable representation of the person. In a skeletal model, each body part may be characterized as a mathematical vector defining joints and bones of the skeletal model. Body parts can move relative to one another at the joints.
The model may then be used to interact with an application that is executed by the computing environment. The scan to generate the model can occur when an application is started or launched, or at other times, as controlled by the application of the scanned person.
The person may be scanned to generate a skeletal model that may be tracked such that physical movements or motions of the user can act as a real-time user interface which adjusts and/or controls parameters of an application. For example, the tracked movements of a person may be used to move an avatar or other on-screen character in an electronic role-playing game; to steer an on-screen vehicle in an electronic racing game; to control the building or organization of objects in a virtual environment; or to perform any other suitable control of the application.
According to one embodiment, at step 600, depth information is received, e.g., from the depth camera system. The depth camera system may capture or observe a field of view that may include one or more targets. In an example embodiment, the depth camera system may use any suitable technique, such as time-of-flight analysis, structured light analysis, or stereo vision analysis, as discussed, to obtain depth information associated with one or more targets in the capture area. As discussed, the depth information may include a depth image or map having a plurality of observed pixels, where each observed pixel has an observed depth value.
The depth image may be downsampled to a lower processing resolution so that it can be more easily used and processed with less computing overhead. Additionally, one or more high-variance and/or noisy depth values may be removed from and/or smoothed in the depth image; portions of missing and/or removed depth information may be filled in and/or reconstructed; and/or any other suitable processing may be performed on the received depth information, such that the depth information can be used to generate a model such as a skeletal model, discussed also in connection with Fig. 8.
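As one illustration of the downsampling mentioned above, a depth map can be block-averaged to a lower processing resolution. This is a hedged sketch, not the patent's implementation; the downsampling factor and the crop-to-multiple behavior are assumptions.

```python
import numpy as np

def downsample_depth(depth, factor=2):
    """Block-average a depth map to a lower processing resolution.
    depth: 2-D array of depth values; factor: integer reduction per axis."""
    h, w = depth.shape
    h, w = h - h % factor, w - w % factor        # crop to a multiple of factor
    blocks = depth[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))              # average each factor x factor block
```

Averaging (rather than decimating) also attenuates some per-pixel sensor noise, which is consistent with the smoothing goals described above.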
At decision step 604, a determination is made as to whether the depth image includes a human target. This can include flood filling each target or object in the depth image and comparing each target or object to a pattern to determine whether the depth image includes a human target. For example, various depth values of pixels in a selected area or point of the depth image may be compared to determine edges that may define targets or objects, as described above. The likely Z values of the Z layers may be flood filled based on the determined edges. For example, the pixels associated with the determined edges and the pixels of the area within the edges may be associated with each other to define a target or object in the capture area that can be compared with a pattern, which will be described in more detail below.
If decision step 604 is true, step 606 is performed. If decision step 604 is false, additional depth information is received at step 600.
The pattern to which each target or object is compared may include one or more data structures having a set of variables that collectively define a typical body of a human. Information associated with the pixels of, e.g., a human target and a non-human target in the field of view may be compared with the variables to identify a human target. In one embodiment, each of the variables in the set may be weighted based on a body part. For example, various body parts such as a head and/or shoulders in the pattern may have a weight value associated therewith that may be greater than that of other body parts, such as a leg. According to one embodiment, the weight values may be used when comparing a target with the variables to determine whether, and which of, the targets may be human. For example, matches between the variables and the target that have larger weight values may yield a greater likelihood of the target being human than matches with smaller weight values.
Step 606 includes scanning the human target for body parts. The human target can be scanned to provide measurements, such as length, width, or the like, associated with one or more body parts of the person, to provide an accurate model of the person. In an example embodiment, the human target may be isolated, and a bitmask of the human target may be created to scan for the one or more body parts. The bitmask may be created, e.g., by flood filling the human target such that the human target is separated from other targets or objects in the capture area elements. The bitmask may then be analyzed for the one or more body parts to generate a model of the human target, such as a skeletal model, a mesh human model, or the like. For example, according to one embodiment, measurement values determined by the scanned bitmask may be used to define one or more joints in a skeletal model. The one or more joints may be used to define one or more bones that may correspond to a body part of a human.
For example, the top of the bitmask of the human target may be associated with a location of the top of the head. After determining the top of the head, the bitmask may be scanned downward to then determine a location of a neck, a location of the shoulders, and so forth. For example, the width of the bitmask at a position being scanned may be compared to a threshold value of a representative width associated with, e.g., a neck, shoulders, or the like. In an alternative embodiment, the distance from a previously scanned position associated with a body part in the bitmask may be used to determine the location of the neck, shoulders, or the like. Some body parts, such as legs, feet, or the like, may be calculated based on, e.g., the locations of other body parts. Upon determining the values of a body part, a data structure is created that includes the measurement values of the body part. The data structure may include scan results averaged from a plurality of depth images which are provided at different points in time by the depth camera system.
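The top-down bitmask scan described above can be sketched roughly as follows. This is an illustrative assumption-laden sketch, not the patent's exact procedure: the width thresholds and the head-to-neck transition test are invented for the example.

```python
import numpy as np

def scan_for_landmarks(mask, neck_max_width, shoulder_min_width):
    """Scan a boolean bitmask of a human target from top to bottom.
    Returns rows for (top of head, neck, shoulders); None where not found."""
    widths = mask.sum(axis=1)                # target width at each row
    rows = np.flatnonzero(widths)
    if rows.size == 0:
        return None, None, None
    top = int(rows[0])                       # top of bitmask ~ top of head
    neck = shoulder = None
    for r in rows:
        w = int(widths[r])
        if neck is None:
            # neck: first row below the head whose width drops under threshold
            if r > top and w <= neck_max_width < widths[r - 1]:
                neck = int(r)
        elif shoulder is None and w >= shoulder_min_width:
            shoulder = int(r)                # shoulders: first wide row below neck
            break
    return top, neck, shoulder
```

A production scan would compare against calibrated representative widths and use distances from previously found body parts, as the text notes.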
Step 608 includes generating a model of the human target, including an initial estimate of the hand position. In one embodiment, measurement values determined by the scanned bitmask may be used to define one or more joints in a skeletal model. The one or more joints are used to define one or more bones that correspond to a body part of a human.
One or more joints may be adjusted until the joints are within a range of typical distances between a joint and a body part of a human, to generate a more accurate skeletal model. The model may further be adjusted based on, e.g., a height associated with the human target.
At step 610, the model is tracked by updating the person's location several times per second. As the user moves in the physical space, information from the depth camera system is used to adjust the skeletal model such that the skeletal model represents the person. In particular, one or more forces may be applied to one or more force-receiving aspects of the skeletal model, to adjust the skeletal model into a pose that more closely corresponds to the pose of the human target in physical space.
Generally, any known technique for tracking movements of a person can be used.
Fig. 7A depicts an example method for updating a hand position as set forth in step 504 of Fig. 5. Step 700 includes performing smoothing of the initial estimate of the hand position, to provide a smoothed estimate of the position. Starting from the raw input provided by an external tracking system, this step creates a smoothed version of the hand guess, to mitigate the effects of slight instability or jitter in the tracking system. This can be accomplished using an interpolation-based tether technique which minimizes the perceivable latency. See Figs. 7B and 7C for further details. Step 702 includes identifying a volume in the field of view based on the current estimate, where the volume is centered at, or otherwise positioned based on, the current estimate. See, e.g., Fig. 10. The volume can be a 3-D volume such as a rectangular prism, including a cube, or a sphere. The current estimate serves as the center of a small averaging volume in the high-resolution depth image. Step 704 includes searching the volume to identify a position of the hand. Depth (z) values are provided throughout the volume which is covered by the hand. All of them can be averaged together, rather than only the edge values. Taking a local average from the depth map ensures that, even though the hand guess from the tracking system may jitter, the resulting hand point is stable as long as the depth image remains reasonably stable. Step 706 includes providing a new estimate of the hand position as an average of the positions. The average position can be a representative point which is an average of the depth values which are within the averaging volume. Step 708 includes performing smoothing of the new estimate of the hand position, similar to the process of step 700. The smoothed value of step 708 is an example of a value which is derived from the new estimate of the position of step 706.
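The local averaging of steps 702-706 can be sketched as follows. This is a hedged illustration rather than the patent's implementation: the window size, the depth extent of the averaging volume, and the pixel/depth-unit coordinate conventions are assumptions.

```python
import numpy as np

def refine_hand_point(depth, cx, cy, cz, half_win=5, half_depth=80.0):
    """Average every depth sample inside a small box centered on the current
    estimate (cx, cy in pixels, cz in depth units); return the averaged point
    (x, y, z) as the new estimate of the hand position."""
    h, w = depth.shape
    y0, y1 = max(0, cy - half_win), min(h, cy + half_win + 1)
    x0, x1 = max(0, cx - half_win), min(w, cx + half_win + 1)
    ys, xs = np.mgrid[y0:y1, x0:x1]
    zs = depth[y0:y1, x0:x1]
    # Keep only samples whose depth lies within the volume covered by the hand,
    # excluding background pixels far behind it.
    inside = np.abs(zs - cz) <= half_depth
    if not inside.any():
        return float(cx), float(cy), float(cz)   # nothing usable: keep the guess
    return (float(xs[inside].mean()),
            float(ys[inside].mean()),
            float(zs[inside].mean()))
```

Because the result is a mean over many depth samples, frame-to-frame jitter in the tracker's guess has little effect as long as the depth image itself is stable, which is the property the text emphasizes.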
Due to some amount of noise inherent in the depth image caused by the camera sensor, or operating conditions which can significantly increase the amount of noise in the depth image, a further smoothing process may be desired to further stabilize the hand position. This can use an interpolation-based tether technique similar to that described above, which smooths out any small noise resulting from the local averaging.
Note that not all of the steps provided are required, and the order specified can also vary in many cases, in this flowchart and the other flowcharts.
Fig. 7B depicts an example method for performing smoothing as set forth in step 700 of Fig. 7A. Step 710 includes determining a difference Δ between the initial estimate of the hand position at a current point in time, e.g., a current frame of depth data from the motion tracking system, and an estimate of the hand position at a previous point in time, such as at a previous frame. For example, the initial estimate can be represented by coordinates (x(ti), y(ti), z(ti)) and the previous estimate can be represented by coordinates (x(ti-1), y(ti-1), z(ti-1)), where i is a frame index. The difference can be expressed as a magnitude, or as a vector which indicates magnitude and direction. Decision step 712 determines whether Δ is less than a threshold T. T can be set by the application based on factors such as an expected range of movement of the hand as a control input, the size of the frame and of the hand relative to the frame, human perception, the size and/or resolution of the display, and the nature of the movement which is provided on the display. Generally, a movement which is less than T is considered a relatively small movement, for which a smoothing process which imposes some latency that will be acceptable to the user can be applied. A movement which is not less than T is considered a relatively large movement, for which the smoothing process should not be applied, to avoid imposing a noticeable latency, or for which a smoothing process which imposes some slightly noticeable latency can be applied.
It is possible to adapt T to the environment of the motion capture system. For example, when a relatively large display is used, it may be appropriate to use a smaller value of T, since a given amount of latency will be more noticeable to the user. The use of the value of T, based on whether Δ&lt;T or Δ≥T in a given situation, provides two operating regimes. It is also possible to use two or more values of T, so that three or more operating regimes are defined, and the latency can be adjusted according to each regime. For example, for T2&gt;T1, regimes of Δ&lt;T1, T1≤Δ&lt;T2 and Δ≥T2 can be provided.
If Δ&lt;T at decision step 712, step 714 includes providing a current estimate of the hand position which is between the position at the previous point in time and the initial estimate, so that the current estimate lags the initial estimate. For further details, see Fig. 9A. If Δ≥T at decision step 712, step 716 includes setting the current estimate to be substantially the same as the initial estimate. Alternatively, the current estimate of the hand position can be between the position at the previous point in time and the initial estimate, as in step 714, but with the current estimate lagging the initial estimate less than in step 714. Substantially the same refers to being of equal value, or within a range of round-off or truncation error. For further details, see Fig. 9B.
Latency is a negative side effect of smoothing. Smoothing is the goal; hiding the noticeable latency is the strategy. How close the new estimate is to the current or previous estimate is based on the ratio Δ/T. In effect, this increases the amount of smoothing when Δ is significantly below the threshold. This dynamic smoothing allows the latency, the negative side effect, to also change dynamically based on the speed of the hand movement. When fast movements are made, little or no smoothing is used, so the latency is low. When the hand is relatively stable, more smoothing with higher latency is used, but since there is little movement of the hand, the latency is not very perceptible.
Fig. 7C depicts another example method for performing smoothing as set forth in step 700 of Fig. 7A. Steps 720 and 722 are the same as steps 710 and 712, respectively, of Fig. 7B. If Δ&lt;T at decision step 722, step 724 includes providing an interpolation value Interp.=Δ/T. Since Δ ranges between 0 and T, Interp. will range between 0 and 1. Step 726 includes providing the current estimate from the relationship: previous estimate + (Δ × Interp.), which can be expressed as: previous estimate + Δ²/T. In essence, the current estimate can be a non-linear function of Δ. In this example, Δ is squared, i.e., raised to the power of two. Generally, Δ can be raised to a power greater than one, such as Δ^1.5, in which case the denominator should be modified so that the interpolation still ranges from 0 to 1. Many variations are possible. If Δ≥T at decision step 722, step 728 includes providing the current estimate of the hand position as being substantially the same as the initial estimate. Thus, when Δ is less than the threshold, the distance of the current estimate from the initial estimate varies non-linearly with Δ, being proportionally larger when Δ is smaller, and proportionally smaller when Δ is larger. That is, the smaller Δ is, the more the current estimate lags the initial estimate, and the larger Δ is, the less the lag. Also, the distance of the current estimate from the initial estimate approaches zero as Δ approaches the threshold. That is, the current estimate approaches the initial estimate as Δ approaches the threshold.
In this interpolation-based tether technique, a lagging point is created which follows the raw input point. The position of the lagging point is updated in a manner similar to an elastic tether attached to the raw input, such as the initial estimate. The lagging point moves toward the raw input in proportion to its distance from the raw input. If the raw input is far from the lagging point, the lagging point moves quickly toward the raw input, to accommodate fast movements. If the raw input is near the lagging point, the lagging point moves slowly toward the raw input, smoothing out small jittery movements. One embodiment of this tether technique uses a linear interpolation between the raw input point and the lagging point. The interpolation value equals the distance between the raw input and the lagging point, divided by a fixed maximum distance T. During high-speed hand movements, the raw input moves far from the lagging point, so the interpolation value approaches one, causing the computed interpolated point to be close to the raw input. During slow movements, the interpolation is near zero, resulting in a computed result which is fairly stationary. This approach minimizes the observed latency during fast movements, while maintaining a strong smoothing effect during slow movements.
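The tether update described above can be sketched as follows. This is a hedged illustration, not the patent's code: the class name, the tuple point representation, and the exponent parameter are assumptions (with power p=1, the lagging point moves by Δ·(Δ/T), i.e., the Δ²/T displacement of Fig. 7C; larger p lags more at low speeds).

```python
import math

class Tether:
    """A lagging point follows the raw input, moving toward it in
    proportion to their separation, capped by a maximum distance T."""
    def __init__(self, max_dist, power=1.0):
        self.T = max_dist
        self.p = power
        self.lag = None                 # the lagging (smoothed) point

    def update(self, raw):
        if self.lag is None:
            self.lag = raw              # first frame: no history to smooth with
            return raw
        d = math.dist(self.lag, raw)    # delta: lagging point -> raw input
        if d >= self.T:
            self.lag = raw              # large move: no smoothing, no latency
        else:
            t = (d / self.T) ** self.p  # interpolation value in [0, 1)
            self.lag = tuple(l + t * (r - l) for l, r in zip(self.lag, raw))
        return self.lag
```

Feeding each frame's raw hand estimate through `update` yields the smoothed estimate: fast motions pass through almost unchanged, while small jitters are strongly damped.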
Fig. 7D depicts another example method for updating a hand position as set forth in step 504 of Fig. 5. This is a more complex embodiment which uses a search method that analyzes the depth map to locate the hand, in an attempt to detect and correct large temporal errors in the tracking system. The hand guess provided by the external tracking system is only used by this more complex embodiment to reset the search algorithm in situations which meet specified criteria, such as when the estimated hand position is directly at the side of the body, near the front of the body, or near the other hand, where this embodiment is known to perform poorly.
Step 730 includes identifying a set of reference points in a 3-D skeletal model. The reference points can include, e.g., the shoulders, the head, or other points on the upper torso, such as a line between the shoulders and a centerline of the upper torso. These are reference points which can assist in determining the position of the hand. Step 732 includes stabilizing the reference points. For example, this can include determining whether a reference point is occluded by another body part, as discussed in connection with Fig. 7E. This step creates a set of stable upper body points that can be used as reference points from which to begin searching. It can involve finding a stable head position, shoulder positions, and a basic body orientation. A guess for each of these joints can be provided by the external tracking system. Heuristics can be used to either smooth or ignore the guesses from the tracking system. In situations where an arm occludes the head and/or shoulders, the tracking guesses can be very unstable or unreliable. In these cases, an occlusion can be detected, e.g., by measuring the proximity of the arm joints to the upper body joints in the projected camera image. If an arm joint is close to an upper body joint, as defined by a threshold distance based on experimentation or testing, an occlusion situation may be occurring. Moreover, the strength of the smoothing can be non-uniform along the different axes. For example, the instability along the vertical axis can be much higher than along the lateral or frontal axes. A combination of these techniques can be used to generate stable upper body points.
Also, in situations where the user's orientation is known to be such that the line between the shoulder blades is largely perpendicular to the camera axis (the z-axis), it may be useful, for added stability, to enforce this orientation constraint. The shoulder blade vector is defined as being perpendicular to the vector which extends from the shoulder center to the camera center.
Step 734 includes defining at least one vector which begins at a reference point and extends to the hand position determined at a previous point in time, e.g., time ti-1. For further details, see Fig. 11A. Step 736 includes traversing the at least one vector to search for the arm extremity. This includes scoring candidate positions based on their distances relative to the at least one vector, as discussed further in connection with Fig. 11C. Once stable reference points such as the shoulders have been found in step 732, an arm search vector can be defined from each shoulder to each previous hand position. If a previous hand position is unavailable, the raw hand guess provided by the tracking system can be used. The arm search vector defines the approximate direction of the hand relative to the shoulder. From frame to frame, the hands are likely to be close to their previous positions, so the best candidate hand extremity can be tracked by incrementally updating the search vector and searching along it for candidate hand extremities. Extremity candidates along the search vector can be scored according to their distance along the vector, minus their perpendicular distance to the vector. This favors points which are farther along the direction of the vector, but disfavors candidates which stray far from its end. A maximum arm length is also used to limit the distance of the search. When the search vector yields an extremity result which is too close to the body, or which is otherwise considered poor, the extremity can be set to the raw hand guess provided by the external tracking system. Too close to the body can mean, e.g., within a fixed distance of the body, e.g., 5 cm; within a distance based on a measure of the torso diameter or the shoulder width (e.g., 25% of the shoulder width); or below a minimum angle relative to the vertical orientation of the body (e.g., below 20 degrees from the vertical axis of the body). "Poor" can be defined as jumping too much, exceeding a certain threshold limit of distance or direction, or being too far from the initial estimate.
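The scoring rule described above (distance along the vector, minus perpendicular distance, bounded by a maximum arm length) can be sketched as follows. The function name, the candidate representation, and the equal 1:1 weighting of the two terms are illustrative assumptions.

```python
import numpy as np

def score_candidates(shoulder, search_dir, candidates, max_arm_len):
    """Pick the best candidate arm-extremity point along a search vector.
    Returns (best_candidate, best_score), or (None, -inf) if none qualify."""
    d = np.asarray(search_dir, dtype=float)
    d = d / np.linalg.norm(d)                      # unit search direction
    best, best_score = None, -np.inf
    for c in candidates:
        v = np.asarray(c, dtype=float) - shoulder
        along = float(v @ d)                       # progress along the vector
        if along > max_arm_len:
            continue                               # beyond plausible arm reach
        perp = float(np.linalg.norm(v - along * d))
        score = along - perp                       # far along, close to the line
        if score > best_score:
            best, best_score = c, score
    return best, best_score
```

If the winning candidate is still too close to the body or otherwise "poor" under the criteria above, the caller would fall back to the external tracker's raw hand guess.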
After the final extremity guess has been chosen, a local average of the high-resolution depth map can be taken to smooth the hand position, as described above. Similarly, a final interpolation-based tether smoothing may be desired, to further reduce noise in the hand position (step 744). The smoothed value of step 744 is an example of a value which is derived from the new estimate of the position of step 742.
Steps 738, 740, 742 and 744 can be the same as steps 702, 704, 706 and 708, respectively, of Fig. 7A.
Fig. 7E depicts an example method for stabilizing reference points of a model as set forth in step 732 of Fig. 7D. Step 750 includes determining whether a reference point of the body is occluded. See, e.g., Fig. 12A. For example, at step 752, this can include measuring the proximity of at least one arm joint (such as an elbow or wrist) to at least one upper body location (e.g., the shoulders, the head, or other points on the upper torso, including a line between the shoulder blades and a centerline of the upper torso). At step 754, if the reference point is determined to be occluded, its position is determined based on at least one other reference point of the 3-D skeletal model which is not occluded. For example, if one shoulder position is known relative to the centerline of the upper torso, the other shoulder position can be determined.
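The two steps of Fig. 7E can be sketched as follows. Both the pixel proximity threshold and the reflection-across-the-centerline recovery are illustrative assumptions; the patent only specifies proximity testing and inference from non-occluded reference points.

```python
import math

def is_occluded(arm_joint_px, body_joint_px, thresh_px=30.0):
    """Step 752 sketch: flag a likely occlusion when an arm joint projects
    close to an upper-body joint in the camera image (threshold assumed)."""
    return math.dist(arm_joint_px, body_joint_px) < thresh_px

def infer_occluded_shoulder(visible_shoulder, torso_centerline_pt):
    """Step 754 sketch: recover an occluded shoulder by reflecting the
    visible shoulder across a point on the upper-torso centerline."""
    return tuple(2 * c - s for s, c in zip(visible_shoulder, torso_centerline_pt))
```

In practice the threshold would be tuned by testing, as the text notes, and could differ per axis given the non-uniform instability mentioned above.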
Fig. 7F depicts another example method for updating a hand position as set forth in step 504 of Fig. 5. Step 760 includes obtaining a new hand position based on a first technique. Decision step 762 determines whether the result is satisfactory. If decision step 762 is true, the process ends at step 766. If decision step 762 is false, step 764 includes obtaining a new hand position based on a second technique. This approach can be extended to use other techniques as well. For example, the first technique can be the method of Fig. 7A in conjunction with Fig. 7B or 7C, and the second technique can be the method of Fig. 7D in conjunction with Fig. 7E.
For example, the strategy of using the external tracking guess only as a reset point when the search method fails makes the technique more robust to temporal errors in the external tracking system, and vice versa. This technique can fail to provide a reasonable hand point only when both methods of tracking the hand position fail simultaneously. Given a management system which can correctly select or combine the outputs of multiple tracking processes, this approach can be extended to include a third or fourth method of hand tracking, further reducing the failure cases. Generally, the criteria used at step 762 for determining when a result is satisfactory can be defined based on a set of known criteria which define motions or position ranges in which an algorithm may perform poorly, given previous testing, e.g., when the hand is placed at the side of the body, near the chest, or near the other hand.
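The cascade of Fig. 7F, generalized to any number of techniques, can be sketched as follows. The helper names and the `None`-for-failure convention are assumptions for illustration; the satisfaction test stands in for the known-criteria check described above.

```python
def refine_hand(raw_guess, techniques, is_satisfactory):
    """Try each hand-refinement technique in order; accept the first
    satisfactory result; fall back to the external tracker's raw guess,
    which serves as the reset point, if all techniques fail."""
    for technique in techniques:
        result = technique(raw_guess)
        if result is not None and is_satisfactory(result):
            return result
    return raw_guess
```

A combination manager could instead blend the outputs of the techniques rather than selecting one, as the text suggests.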
Fig. 8 depicts an example model of a user as set forth in step 608 of Fig. 6. The model 800 faces the depth camera in the z direction, so that the cross-section shown is in the x-y plane. Note the vertical y-axis and the lateral x-axis. A similar notation is provided in the other figures. The model includes a number of reference points, such as the top of the head 802, the bottom of the head or chin 813, the right shoulder 804, the right elbow 806, the right wrist 808, and the right hand 810, represented by a fingertip area, for instance. The right and left sides are defined from the perspective of the user facing the camera. This is the initial estimate of the hand position. The hand position 810 is based on a determined edge region 801 of the hand. However, as mentioned, there may be some error in this initial position determination due to noise or other factors. The area between the regions 801 and 803 identifies an uncertainty range of the hand position. Another approach is to represent the hand position by a center point of the hand. The model also includes a left shoulder 814, left elbow 816, left wrist 818 and left hand 820. A waist region 822 is also depicted, along with a right hip 824, right knee 826, right foot 828, left hip 830, left knee 832 and left foot 834. A shoulder line 812 is a line, typically horizontal, between the shoulders 804 and 814. An upper torso centerline 825, extending, e.g., between the points 822 and 813, is also depicted.
Fig. 9A depicts an example technique for performing smoothing as set forth in step 700 of Fig. 7A, when the difference between the initial estimate and the previous estimate is less than a threshold. Here, a point 902 represents the previous estimate of the hand position, e.g., at time ti-1. In this example, the point 902 is the center of a spherical volume 900 having a radius T, where T is the threshold discussed previously. A point 810 (consistent with Fig. 8) represents the initial estimate of the hand position at the current time, i.e., time ti. Δ is the difference between the points 810 and 902. Δ can be, e.g., a vector in the direction from the point 902 to the point 810. A point 904 represents the current estimate of the hand position, which trails the point 810 by a distance based on the interpolation value Δ × Interp. The amount of lag is the distance between the initial estimate and the current estimate. The point 904 is at a smaller distance from the point 902 than the point 810, and lies along the vector from the point 902 to the point 810. Within the volume 900, as Δ approaches T, the interpolation value approaches 1. For Δ≥T, the interpolation value = 1.
Fig. 9B depicts an example technique for performing smoothing as set forth in step 700 of Fig. 7A, when the difference between the initial estimate and the previous estimate is greater than or equal to the threshold. The point 902 and volume 900 are the same as in Fig. 9A. A point 906 represents an alternative initial estimate of the hand position at the current time, i.e., time ti. Note that the point 906 is outside the volume. Δ is the difference between the points 902 and 906. Δ can be, e.g., a vector in the direction from the point 902 to the point 906. The point 906 represents both the current and the initial estimate of the hand position. Here, Δ≥T, so the interpolation value = 1.
Fig. 10 depicts an example technique for providing a new estimate of a hand position as set forth in steps 704 and 706 of Fig. 7A. The volume 900 and point 904 are consistent with Fig. 9A. Once the current estimate of the point 904 has been obtained, another volume 1000, such as a sphere, rectangular prism, or cube, can be defined, in which the point 904 is centered. This volume is searched to detect the presence or absence of the hand. That is, for each point in the volume, a determination is made as to whether the point represents free space or some part of the model of the body. An edge can thereby be detected in the 3-D space by detecting a transition between a part of the model and free space. Example points on a detected edge 1002 are represented by circles. The points can extend in 3-D around the body part 1006 which is assumed to be the hand. A depth average of the body part 1006 can be taken, based on the depth (Z) values throughout the volume which is covered by the hand, to obtain a point 1004 as the new estimate of the hand position. The point 1004 can be an average of all of the edge region points.
Figure 11A depicts an example of defining at least one vector, as described in step 734 of Fig. 7D. A 3-D model 1100 depicts a portion of a person including a right shoulder joint 1104, a right elbow joint 1106, and a point 1110 which is an initial estimate of the hand position. The outline of the hand is depicted as extending somewhat beyond point 1110, showing the hand as larger than it actually is, to illustrate that there is some inaccuracy when point 1110 represents the end of the hand. This inaccuracy can be caused by noise, the type of hand-detection algorithm used, and other factors discussed previously. One technique for improving the accuracy of the hand position involves defining one or more vectors, such as vectors 1112 and 1114 which extend from the shoulder to point 1110, the initial estimate of the hand position. In this example, the arm is bent, so that the forearm, represented by vector 1114, extends in a significantly different direction than the upper arm, represented by vector 1112. Use of the forearm vector 1114 will be sufficient in this example. In other cases, the arm may be relatively straighter, in which case a single vector can be used, e.g., from the shoulder joint to the initial estimate of the hand position. Further, in situations where the estimate of the elbow is unavailable or unreliable, a vector directly from the shoulder to the hand may be sufficient in many cases. In another example, a vector along the leg, e.g., from the hip to the foot, or from the knee to the foot, is used to determine the foot position.
This concept utilizes one or more reference points of the body, such as the shoulder or elbow joint, in refining the estimate of the hand position.
Figure 11B depicts an example of searching for the end of the arm, as described in step 736 of Fig. 7D. The one or more vectors defined in Fig. 11A are traversed to identify candidate positions for the hand and to define a score for each candidate position. As a simplified example, each circle represents an evaluated hand position. The evaluated hand positions can be constrained to lie within a certain perpendicular offset of the one or more vectors (e.g., based on an expected range of arm thickness), and to extend a certain distance beyond the initial estimate of the hand position in the direction of the at least one vector (e.g., based on an expected range of arm length). Each evaluated hand position is evaluated to determine whether it is part of the 3-D model, e.g., whether depth map data exists for that point.
Open or white circles represent evaluated hand positions which are part of the 3-D model, and which are therefore candidate hand positions. Dark circles represent evaluated hand positions which are not part of the 3-D model. In this example, point 1116 is determined to be the candidate hand position with the highest score, and therefore becomes the new estimate of the hand position.
Figure 11C depicts an example of scoring the candidate positions, as described in step 736 of Fig. 7D. Each candidate hand position can be scored based on its distance along the at least one vector and its perpendicular distance from the at least one vector. In one approach, the score equals the distance along the at least one vector minus the perpendicular distance from the at least one vector. For example, when traversing vector 1114, the score of point 1116 is d2 - d1, where d2 is the distance along vector 1114, i.e., from the elbow joint 1106, and d1 is the perpendicular distance from vector 1114. This approach is biased toward candidate points which are farthest along the vector and closest to it. Other scoring techniques can also be used, e.g., techniques which weight the distance along the vector differently than the perpendicular distance from the vector. The position with the highest score can be considered the most likely hand position.
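The d2 - d1 scoring rule can be sketched as follows (a hypothetical helper; the patent leaves the exact weighting open, and this uses the simple unweighted difference):

```python
import numpy as np

def score_candidates(origin, direction, candidates):
    """Score candidate hand positions against a limb vector.

    `origin` is the joint the vector starts from (e.g., the elbow 1106)
    and `direction` points toward the hand. Each candidate's score is
    its distance along the vector (d2) minus its perpendicular distance
    from the vector (d1); the highest score wins.
    """
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    best, best_score = None, -np.inf
    for c in candidates:
        rel = np.asarray(c, dtype=float) - origin
        d2 = float(rel @ direction)                        # along-vector distance
        d1 = float(np.linalg.norm(rel - d2 * direction))   # perpendicular distance
        if d2 - d1 > best_score:
            best, best_score = c, d2 - d1
    return best, best_score
```

A weighted variant, e.g., `d2 - w * d1`, corresponds to the alternative scoring techniques mentioned above.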
Figure 12A depicts an example frontal view of a user's model in which a reference point of the body model is occluded, as described in step 750 of Fig. 7E. In some cases, a reference point in the body model which could be used to refine the hand position may be occluded. For example, a shoulder used as a reference point can be occluded by an arm which the user raises, as is the case in this example.
The model 1200 faces the depth camera in the z direction, so that the cross section shown is in the x-y plane. The model includes a number of reference points, such as the top of the head 1202, the bottom of the head or chin 1213, the right shoulder 1204, the right elbow 1206, the right wrist 1208, and the right hand 1210. Point 1210 can be the initial estimate of the hand position. The model also includes a left shoulder 1214, left elbow 1216, left wrist 1218, and left hand 1220. A waist region 1222 is also depicted, along with a right hip 1224, right knee 1226, right foot 1228, left hip 1230, left knee 1232, and left foot 1234. A shoulder line 1212 is a generally horizontal line between the shoulders 1204 and 1214. An upper-torso centerline 1225 is also depicted, extending, e.g., between points 1222 and 1213. As can be seen, the shoulder 1204 is occluded by the arm which the user has raised.
As discussed in connection with Figs. 11A-11C, when the shoulder 1204 is used as a reference point to define one or more vectors for refining the hand position, the fact that shoulder point 1204 is occluded can cause difficulty in accurately defining its position. In this case, a stabilization process for the shoulder point involves using other, non-occluded reference points of the body to confirm and/or define the shoulder point position, as discussed further in connection with Figs. 12C and 12D.
Figure 12B depicts a side view of the model of Fig. 12A. Here, it can be seen that the user's hand is raised in front of the body, so that part of the body is occluded from the perspective of the depth camera, which faces the z direction. Note that raising the right or left arm in front of the body is a common pose in gesture-based actions which provide control inputs to an application. However, other poses can also result in occlusions.
Figure 12C depicts a projected camera image view of the model of Fig. 12A. The projected camera image view is a 2-D view of the 3-D body model, showing the relative in-plane positions of the reference positions of the body. The reference positions of Fig. 12C correspond to the like-numbered positions of Fig. 12A, but the outline of the body model is removed for clarity. Additionally, the distances of several points from the upper-torso centerline 1225 are depicted as examples, namely: d3 (right hand 1210), d4 (right wrist 1208), d5 (right shoulder 1204, assumed identical to the d5 of the left shoulder 1214), d6 (right elbow 1206), d7 (left elbow 1216), and d8 (left wrist 1218).
The positions of one or more non-occluded points in the 3-D model can be used to determine the position of shoulder point 1204. For example, shoulder point 1204 can be assumed to be at the same distance from the centerline 1225 as shoulder point 1214. In some cases, a line 1212 extending from the shoulder 1214 to the centerline 1225 can be defined, so that shoulder point 1204 is further refined to lie on line 1212.
Additionally, the possibility that shoulder point 1204 is occluded can be determined in a number of different ways, e.g., by determining the position of the right arm based on the positions of the wrist 1208 and elbow 1206. In some cases, the absolute distance from the centerline to the wrist 1208 or elbow 1206 can indicate an occlusion. Further, the distance from the centerline to the opposite-side wrist 1218 or elbow 1216 can be compared to the distance from the centerline to the wrist 1208 or elbow 1206. Likewise, the distance from the centerline to the wrist 1208 or elbow 1206 can be compared to the approximate distance from the shoulder 1214 to the centerline 1225, i.e., d5. Various other heuristics and metrics can also be used to determine whether an occlusion condition exists, and to determine the position of the occluded point. The orientation of the model can also be used in determining whether an occlusion condition exists and in determining the position of the occluded point.
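One of the distance heuristics suggested above might be sketched as follows (a hypothetical test, not from the patent; the `margin` fraction is an assumed tuning parameter):

```python
def shoulder_occluded(centerline_x, wrist_x, elbow_x, d5, margin=0.5):
    """Heuristic occlusion test: flag the shoulder as possibly occluded
    when the same-side wrist or elbow has moved close to the body
    centerline, i.e., within a fraction `margin` of the expected
    shoulder-to-centerline distance d5.
    """
    wrist_dist = abs(wrist_x - centerline_x)
    elbow_dist = abs(elbow_x - centerline_x)
    return min(wrist_dist, elbow_dist) < margin * d5
```

A real system would combine several such metrics, plus the model orientation, before concluding that a point is occluded.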
Figure 12D depicts a top view of the 3-D model of Fig. 12A. The distance d5 from the user's left shoulder 1214 to the body centerline, which can pass through the top-of-head point 1202, together with the line 1212 which starts at the user's left shoulder 1214 and passes through the centerline, can be used to determine the position of the right shoulder 1204, by assuming that it is also at the distance d5 from the centerline 1225. In this example, the user directly faces the camera in the -z direction. However, the technique described can also be used if the user's body is in another orientation, e.g., rotated about the -z axis.
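The mirroring described in Figs. 12C-12D can be sketched as a reflection of the visible shoulder across the upper-torso centerline (a hypothetical helper, assuming the body directly faces the camera as in this example):

```python
import numpy as np

def mirror_occluded_shoulder(visible_shoulder, centerline_pt, centerline_dir):
    """Estimate an occluded shoulder from the visible one.

    The visible shoulder (e.g., point 1214) is reflected across the
    upper-torso centerline 1225: the occluded shoulder (point 1204) is
    assumed to lie at the same perpendicular distance (d5) on the
    opposite side, on the shoulder line.
    """
    c = np.asarray(centerline_pt, dtype=float)
    d = np.asarray(centerline_dir, dtype=float)
    d = d / np.linalg.norm(d)
    rel = np.asarray(visible_shoulder, dtype=float) - c
    along = (rel @ d) * d            # component along the centerline
    perp = rel - along               # perpendicular offset (distance d5)
    # Mirror the perpendicular component to the other side of the centerline.
    return c + along - perp
```

For a rotated body, the same reflection would be applied after transforming into the torso's own coordinate frame.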
The foregoing detailed description of the technology has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the technology to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. The described embodiments were chosen to best explain the principles of the technology and its practical application, thereby enabling others skilled in the art to best utilize the technology in various embodiments and with various modifications as are suited to the particular use contemplated. The scope of the technology is intended to be defined by the claims appended hereto.
Claims (10)
1. A processor-implemented method for tracking user movement in a motion capture system, comprising the processor-implemented steps of:
tracking a user's hand (810) over time in a field of view (6) of the motion capture system (10), including obtaining 3-D depth images (800) of the hand at different points in time; and
for a point in time:
obtaining an initial estimate (810) of a position of the hand in the field of view, based on the tracking;
determining a difference (Δ) of the initial estimate relative to a corresponding estimate (902) of a previous point in time;
determining whether the difference is less than a threshold (T);
if the difference is less than the threshold, providing a current estimate (904) of the position by changing the initial estimate by an amount (Δ × Interp.) which is less than the difference; and
if the difference is not less than the threshold, providing the current estimate of the position substantially according to the initial estimate;
based on the current estimate, defining a volume (1000) in the field of view;
searching the 3-D depth image in the volume to determine a new estimate (1004) of the position of the hand in the field of view; and
providing a control input to an application which represents the hand in the field of view, based at least in part on the new estimate of the position, or a value derived from the new estimate of the position.
2. The processor-implemented method of claim 1, wherein:
when the difference is less than the threshold, the amount is set so that the current estimate lags the initial estimate by more when the difference is smaller, and by less when the difference is larger.
3. The processor-implemented method of claim 1 or 2, wherein:
when the difference is less than the threshold, the amount is set so that the current estimate approaches the initial estimate as the difference approaches the threshold.
4. The processor-implemented method of any one of claims 1 to 3, further comprising, for the point in time:
determining a difference of the new estimate relative to a corresponding estimate of the previous point in time;
determining whether the difference of the new estimate is less than the threshold;
if the difference of the new estimate is less than the threshold, providing a new current estimate of the position by changing the new estimate by an amount which is less than the difference of the new estimate; and
if the difference of the new estimate is not less than the threshold, providing the new current estimate of the position substantially according to the new estimate, wherein the control input is provided to the application based at least in part on the new current estimate, or a value derived from the new current estimate.
5. The processor-implemented method of any one of claims 1 to 4, wherein:
the searching includes identifying positions of edges of the hand in the volume, and determining an average of the positions of the edges.
6. The processor-implemented method of any one of claims 1 to 5, further comprising:
tracking the user's body (8) in the field of view (6) of the motion capture system (10), including obtaining the 3-D depth image and determining a 3-D skeletal model (800) of the body; and
obtaining the initial estimate (810) of the position of the hand in the field of view by identifying a position (1110) of the hand in the 3-D skeletal model;
wherein, for a next point in time, the obtaining of an initial estimate (810) of the position of the hand in the field of view comprises:
identifying a reference point (1104) of the 3-D skeletal model;
defining at least one vector (1112, 1114) from the reference point of the next point in time to the position (1110) of the hand of the point in time; and
traversing the at least one vector to find a most likely hand position of the next point in time, including scoring candidate positions which are part of the 3-D skeletal model, the scoring being based on a distance (d2) of the candidate position along the at least one vector and a perpendicular distance (d1) of the candidate position from the at least one vector.
7. The processor-implemented method of claim 6, wherein:
the scoring of the candidate positions includes providing a score which indicates a more likely position in proportion to an increasing distance along the at least one vector, and in proportion to a decreasing perpendicular distance from the at least one vector.
8. as the method for claim 6 or 7 described processors realizations, it is characterized in that:
The reference point of described 3-D skeleton pattern is based on from one group of point (804,814) sign of described 3-D skeleton pattern sign, and this group point identifies the both shoulders of described 3-D skeleton pattern at least.
9. as the method for each the described processor realization in the claim 6 to 8, it is characterized in that:
The reference point of described 3-D skeleton pattern is based on from one group of point identification of described 3-D skeleton pattern sign; And
Described method comprises also when at least one point (1204) of determining in described one group of point may be blocked, and in response to determining that described at least one point in described one group of point may be blocked, and determines the position of described at least one point based at least one other point (1214) of described 3-D skeleton pattern.
10. A tangible computer-readable storage having computer-readable software embodied thereon for programming at least one processor to perform a method in a motion capture system, the method comprising:
tracking a user's hand (810) over time in a field of view (6) of the motion capture system (10), including obtaining 3-D depth images (800) of the hand at different points in time; and
for a point in time:
obtaining an initial estimate (810) of a position of the hand in the field of view, based on the tracking;
determining a difference (Δ) of the initial estimate relative to a corresponding estimate (902) of a previous point in time;
determining whether the difference is less than a threshold (T);
if the difference is less than the threshold, providing a current estimate (904) of the position by imposing a latency (Δ × Interp.) on the initial estimate;
if the difference is not less than the threshold, providing the current estimate (906) of the position by one of: (a) setting the current estimate substantially according to the initial estimate, and (b) imposing a latency on the initial estimate which is less than the latency imposed on the initial estimate when the difference is less than the threshold; and
providing a control input to an application which represents the hand in the field of view, based at least in part on the current estimate of the position, or a value derived from the current estimate of the position.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/767,126 US8351651B2 (en) | 2010-04-26 | 2010-04-26 | Hand-location post-process refinement in a tracking system |
US12/767,126 | 2010-04-26 |
Publications (2)
Publication Number | Publication Date |
---|---|
CN102184009A true CN102184009A (en) | 2011-09-14 |
CN102184009B CN102184009B (en) | 2014-03-19 |
Family
ID=44570193
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201110112867.2A Active CN102184009B (en) | 2010-04-26 | 2011-04-25 | Hand position post processing refinement in tracking system |
Country Status (2)
Country | Link |
---|---|
US (2) | US8351651B2 (en) |
CN (1) | CN102184009B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103537099A (en) * | 2012-07-09 | 2014-01-29 | 深圳泰山在线科技有限公司 | Tracking toy |
CN103970264A (en) * | 2013-01-29 | 2014-08-06 | 纬创资通股份有限公司 | Gesture recognition and control method and device |
CN104380729A (en) * | 2012-07-31 | 2015-02-25 | 英特尔公司 | Context-driven adjustment of camera parameters |
CN104463906A (en) * | 2014-11-11 | 2015-03-25 | 广东中星电子有限公司 | Object tracking device and method |
CN104978554A (en) * | 2014-04-08 | 2015-10-14 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN105580051A (en) * | 2013-10-30 | 2016-05-11 | 英特尔公司 | Image capture feedback |
CN105892657A (en) * | 2016-03-30 | 2016-08-24 | 联想(北京)有限公司 | Information processing method and electronic equipment |
CN107729797A (en) * | 2016-08-10 | 2018-02-23 | 塔塔咨询服务有限公司 | System and method based on sensor data analysis identification positions of body joints |
CN108353127A (en) * | 2015-11-06 | 2018-07-31 | 谷歌有限责任公司 | Image stabilization based on depth camera |
Families Citing this family (112)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7989689B2 (en) * | 1996-07-10 | 2011-08-02 | Bassilic Technologies Llc | Electronic music stand performer subsystems and music communication methodologies |
US7297856B2 (en) * | 1996-07-10 | 2007-11-20 | Sitrick David H | System and methodology for coordinating musical communication and display |
ES2569411T3 (en) | 2006-05-19 | 2016-05-10 | The Queen's Medical Center | Motion tracking system for adaptive real-time imaging and spectroscopy |
US8351651B2 (en) | 2010-04-26 | 2013-01-08 | Microsoft Corporation | Hand-location post-process refinement in a tracking system |
US8639020B1 (en) | 2010-06-16 | 2014-01-28 | Intel Corporation | Method and system for modeling subjects from a depth map |
US8964004B2 (en) | 2010-06-18 | 2015-02-24 | Amchael Visual Technology Corporation | Three channel reflector imaging system |
US9789392B1 (en) * | 2010-07-09 | 2017-10-17 | Open Invention Network Llc | Action or position triggers in a game play mode |
KR20120046973A (en) * | 2010-11-03 | 2012-05-11 | 삼성전자주식회사 | Method and apparatus for generating motion information |
US9665767B2 (en) * | 2011-02-28 | 2017-05-30 | Aic Innovations Group, Inc. | Method and apparatus for pattern tracking |
KR101423536B1 (en) * | 2011-06-14 | 2014-08-01 | 한국전자통신연구원 | System for constructiing mixed reality using print medium and method therefor |
JP6074170B2 (en) | 2011-06-23 | 2017-02-01 | インテル・コーポレーション | Short range motion tracking system and method |
US11048333B2 (en) | 2011-06-23 | 2021-06-29 | Intel Corporation | System and method for close-range movement tracking |
CN102841733B (en) * | 2011-06-24 | 2015-02-18 | 株式会社理光 | Virtual touch screen system and method for automatically switching interaction modes |
RU2455676C2 (en) | 2011-07-04 | 2012-07-10 | Общество с ограниченной ответственностью "ТРИДИВИ" | Method of controlling device using gestures and 3d sensor for realising said method |
US9606209B2 (en) | 2011-08-26 | 2017-03-28 | Kineticor, Inc. | Methods, systems, and devices for intra-scan motion correction |
US9002099B2 (en) * | 2011-09-11 | 2015-04-07 | Apple Inc. | Learning-based estimation of hand and finger pose |
US8648808B2 (en) * | 2011-09-19 | 2014-02-11 | Amchael Visual Technology Corp. | Three-dimensional human-computer interaction system that supports mouse operations through the motion of a finger and an operation method thereof |
US9019352B2 (en) | 2011-11-21 | 2015-04-28 | Amchael Visual Technology Corp. | Two-parallel-channel reflector with focal length and disparity control |
US9628843B2 (en) * | 2011-11-21 | 2017-04-18 | Microsoft Technology Licensing, Llc | Methods for controlling electronic devices using gestures |
US9704027B1 (en) * | 2012-02-27 | 2017-07-11 | Amazon Technologies, Inc. | Gesture recognition |
US9019603B2 (en) | 2012-03-22 | 2015-04-28 | Amchael Visual Technology Corp. | Two-parallel-channel reflector with focal length and disparity control |
US9477303B2 (en) | 2012-04-09 | 2016-10-25 | Intel Corporation | System and method for combining three-dimensional tracking with a three-dimensional display for a user interface |
JP2013218549A (en) * | 2012-04-10 | 2013-10-24 | Alpine Electronics Inc | Electronic equipment |
US9747306B2 (en) * | 2012-05-25 | 2017-08-29 | Atheer, Inc. | Method and apparatus for identifying input features for later recognition |
US20140007115A1 (en) * | 2012-06-29 | 2014-01-02 | Ning Lu | Multi-modal behavior awareness for human natural command control |
US9557634B2 (en) | 2012-07-05 | 2017-01-31 | Amchael Visual Technology Corporation | Two-channel reflector based single-lens 2D/3D camera with disparity and convergence angle control |
US9208580B2 (en) * | 2012-08-23 | 2015-12-08 | Qualcomm Incorporated | Hand detection, location, and/or tracking |
TWI496090B (en) | 2012-09-05 | 2015-08-11 | Ind Tech Res Inst | Method and apparatus for object positioning by using depth images |
TWI590099B (en) * | 2012-09-27 | 2017-07-01 | 緯創資通股份有限公司 | Interaction system and motion detection method |
US9201500B2 (en) * | 2012-09-28 | 2015-12-01 | Intel Corporation | Multi-modal touch screen emulator |
US9081413B2 (en) * | 2012-11-20 | 2015-07-14 | 3M Innovative Properties Company | Human interaction system based upon real-time intention detection |
JP2014123189A (en) * | 2012-12-20 | 2014-07-03 | Toshiba Corp | Object detector, method and program |
US9717461B2 (en) | 2013-01-24 | 2017-08-01 | Kineticor, Inc. | Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan |
US10327708B2 (en) | 2013-01-24 | 2019-06-25 | Kineticor, Inc. | Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan |
US9305365B2 (en) | 2013-01-24 | 2016-04-05 | Kineticor, Inc. | Systems, devices, and methods for tracking moving targets |
CN105392423B (en) | 2013-02-01 | 2018-08-17 | 凯内蒂科尔股份有限公司 | The motion tracking system of real-time adaptive motion compensation in biomedical imaging |
WO2014131197A1 (en) * | 2013-03-01 | 2014-09-04 | Microsoft Corporation | Object creation using body gestures |
US20140267611A1 (en) * | 2013-03-14 | 2014-09-18 | Microsoft Corporation | Runtime engine for analyzing user motion in 3d images |
DE102013207223A1 (en) * | 2013-04-22 | 2014-10-23 | Ford Global Technologies, Llc | Method for detecting non-motorized road users |
US9449392B2 (en) | 2013-06-05 | 2016-09-20 | Samsung Electronics Co., Ltd. | Estimator training method and pose estimating method using depth image |
US9144744B2 (en) | 2013-06-10 | 2015-09-29 | Microsoft Corporation | Locating and orienting device in space |
US20150124566A1 (en) | 2013-10-04 | 2015-05-07 | Thalmic Labs Inc. | Systems, articles and methods for wearable electronic devices employing contact sensors |
US10042422B2 (en) | 2013-11-12 | 2018-08-07 | Thalmic Labs Inc. | Systems, articles, and methods for capacitive electromyography sensors |
US11921471B2 (en) | 2013-08-16 | 2024-03-05 | Meta Platforms Technologies, Llc | Systems, articles, and methods for wearable devices having secondary power sources in links of a band for providing secondary power in addition to a primary power source |
US20150054820A1 (en) * | 2013-08-22 | 2015-02-26 | Sony Corporation | Natural user interface system with calibration and method of operation thereof |
US9721383B1 (en) * | 2013-08-29 | 2017-08-01 | Leap Motion, Inc. | Predictive information for free space gesture control and communication |
TW201510771A (en) * | 2013-09-05 | 2015-03-16 | Utechzone Co Ltd | Pointing direction detecting device and its method, program and computer readable-medium |
KR101953960B1 (en) | 2013-10-07 | 2019-03-04 | 애플 인크. | Method and system for providing position or movement information for controlling at least one function of a vehicle |
US10937187B2 (en) | 2013-10-07 | 2021-03-02 | Apple Inc. | Method and system for providing position or movement information for controlling at least one function of an environment |
WO2015081113A1 (en) | 2013-11-27 | 2015-06-04 | Cezar Morun | Systems, articles, and methods for electromyography sensors |
FR3015730B1 (en) * | 2013-12-20 | 2017-07-21 | Thales Sa | METHOD FOR DETECTING PEOPLE AND OR OBJECTS IN A SPACE |
WO2015122079A1 (en) * | 2014-02-14 | 2015-08-20 | 株式会社ソニー・コンピュータエンタテインメント | Information processing device and information processing method |
US10004462B2 (en) | 2014-03-24 | 2018-06-26 | Kineticor, Inc. | Systems, methods, and devices for removing prospective motion correction from medical imaging scans |
KR20150144179A (en) | 2014-06-16 | 2015-12-24 | 삼성전자주식회사 | The Method and Apparatus of Object Part Position Estimation |
US9880632B2 (en) | 2014-06-19 | 2018-01-30 | Thalmic Labs Inc. | Systems, devices, and methods for gesture identification |
US10368784B2 (en) * | 2014-06-23 | 2019-08-06 | Microsoft Technology Licensing, Llc | Sensor data damping |
WO2016014718A1 (en) | 2014-07-23 | 2016-01-28 | Kineticor, Inc. | Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan |
US9811721B2 (en) | 2014-08-15 | 2017-11-07 | Apple Inc. | Three-dimensional hand tracking using depth sequences |
CN104376100B (en) * | 2014-11-25 | 2018-12-18 | 北京智谷睿拓技术服务有限公司 | searching method and device |
US11119565B2 (en) | 2015-01-19 | 2021-09-14 | Samsung Electronics Company, Ltd. | Optical detection and analysis of bone |
US9811165B2 (en) | 2015-03-11 | 2017-11-07 | Samsung Electronics Co., Ltd. | Electronic system with gesture processing mechanism and method of operation thereof |
WO2016205182A1 (en) * | 2015-06-15 | 2016-12-22 | Survios, Inc. | Systems and methods for immersive physical interaction with a virtual environment |
US9943247B2 (en) | 2015-07-28 | 2018-04-17 | The University Of Hawai'i | Systems, devices, and methods for detecting false movements for motion correction during a medical imaging scan |
US10057078B2 (en) | 2015-08-21 | 2018-08-21 | Samsung Electronics Company, Ltd. | User-configurable interactive region monitoring |
US9703387B2 (en) * | 2015-08-31 | 2017-07-11 | Konica Minolta Laboratory U.S.A., Inc. | System and method of real-time interactive operation of user interface |
US10048765B2 (en) | 2015-09-25 | 2018-08-14 | Apple Inc. | Multi media computing or entertainment system for responding to user presence and activity |
WO2017091479A1 (en) | 2015-11-23 | 2017-06-01 | Kineticor, Inc. | Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan |
US11511156B2 (en) | 2016-03-12 | 2022-11-29 | Arie Shavit | Training system and methods for designing, monitoring and providing feedback of training |
US11000211B2 (en) | 2016-07-25 | 2021-05-11 | Facebook Technologies, Llc | Adaptive system for deriving control signals from measurements of neuromuscular activity |
US11331045B1 (en) | 2018-01-25 | 2022-05-17 | Facebook Technologies, Llc | Systems and methods for mitigating neuromuscular signal artifacts |
US10990174B2 (en) | 2016-07-25 | 2021-04-27 | Facebook Technologies, Llc | Methods and apparatus for predicting musculo-skeletal position information using wearable autonomous sensors |
US11216069B2 (en) | 2018-05-08 | 2022-01-04 | Facebook Technologies, Llc | Systems and methods for improved speech recognition using neuromuscular information |
EP3487595A4 (en) | 2016-07-25 | 2019-12-25 | CTRL-Labs Corporation | System and method for measuring the movements of articulated rigid bodies |
US10772519B2 (en) | 2018-05-25 | 2020-09-15 | Facebook Technologies, Llc | Methods and apparatus for providing sub-muscular control |
CN110337269B (en) | 2016-07-25 | 2021-09-21 | 脸谱科技有限责任公司 | Method and apparatus for inferring user intent based on neuromuscular signals |
US10489986B2 (en) | 2018-01-25 | 2019-11-26 | Ctrl-Labs Corporation | User-controlled tuning of handstate representation model parameters |
US9958951B1 (en) | 2016-09-12 | 2018-05-01 | Meta Company | System and method for providing views of virtual content in an augmented reality environment |
US20180241927A1 (en) * | 2017-02-23 | 2018-08-23 | Motorola Mobility Llc | Exposure Metering Based On Depth Map |
US10540778B2 (en) * | 2017-06-30 | 2020-01-21 | Intel Corporation | System for determining anatomical feature orientation |
CN107592449B (en) * | 2017-08-09 | 2020-05-19 | Oppo广东移动通信有限公司 | Three-dimensional model establishing method and device and mobile terminal |
CN112040858A (en) | 2017-10-19 | 2020-12-04 | 脸谱科技有限责任公司 | System and method for identifying biological structures associated with neuromuscular source signals |
US10701247B1 (en) | 2017-10-23 | 2020-06-30 | Meta View, Inc. | Systems and methods to simulate physical objects occluding virtual objects in an interactive space |
US10229313B1 (en) | 2017-10-23 | 2019-03-12 | Meta Company | System and method for identifying and tracking a human hand in an interactive space based on approximated center-lines of digits |
US11069148B2 (en) | 2018-01-25 | 2021-07-20 | Facebook Technologies, Llc | Visualization of reconstructed handstate information |
US11493993B2 (en) | 2019-09-04 | 2022-11-08 | Meta Platforms Technologies, Llc | Systems, methods, and interfaces for performing inputs based on neuromuscular control |
US11961494B1 (en) | 2019-03-29 | 2024-04-16 | Meta Platforms Technologies, Llc | Electromagnetic interference reduction in extended reality environments |
US10970936B2 (en) | 2018-10-05 | 2021-04-06 | Facebook Technologies, Llc | Use of neuromuscular signals to provide enhanced interactions with physical objects in an augmented reality environment |
WO2019147949A1 (en) | 2018-01-25 | 2019-08-01 | Ctrl-Labs Corporation | Real-time processing of handstate representation model estimates |
US11567573B2 (en) | 2018-09-20 | 2023-01-31 | Meta Platforms Technologies, Llc | Neuromuscular text entry, writing and drawing in augmented reality systems |
US11150730B1 (en) | 2019-04-30 | 2021-10-19 | Facebook Technologies, Llc | Devices, systems, and methods for controlling computing devices via neuromuscular signals of users |
WO2019147996A1 (en) | 2018-01-25 | 2019-08-01 | Ctrl-Labs Corporation | Calibration techniques for handstate representation modeling using neuromuscular signals |
US11481030B2 (en) | 2019-03-29 | 2022-10-25 | Meta Platforms Technologies, Llc | Methods and apparatus for gesture detection and classification |
WO2019148002A1 (en) | 2018-01-25 | 2019-08-01 | Ctrl-Labs Corporation | Techniques for anonymizing neuromuscular signal data |
US11907423B2 (en) | 2019-11-25 | 2024-02-20 | Meta Platforms Technologies, Llc | Systems and methods for contextualized interactions with an environment |
US10937414B2 (en) | 2018-05-08 | 2021-03-02 | Facebook Technologies, Llc | Systems and methods for text input using neuromuscular information |
WO2019147928A1 (en) | 2018-01-25 | 2019-08-01 | Ctrl-Labs Corporation | Handstate reconstruction based on multiple inputs |
US10592001B2 (en) | 2018-05-08 | 2020-03-17 | Facebook Technologies, Llc | Systems and methods for improved speech recognition using neuromuscular information |
EP3801216A4 (en) | 2018-05-29 | 2021-04-14 | Facebook Technologies, LLC. | Shielding techniques for noise reduction in surface electromyography signal measurement and related systems and methods |
WO2019241701A1 (en) | 2018-06-14 | 2019-12-19 | Ctrl-Labs Corporation | User identification and authentication with neuromuscular signatures |
US11045137B2 (en) | 2018-07-19 | 2021-06-29 | Facebook Technologies, Llc | Methods and apparatus for improved signal robustness for a wearable neuromuscular recording device |
EP3836836B1 (en) | 2018-08-13 | 2024-03-20 | Meta Platforms Technologies, LLC | Real-time spike detection and identification |
CN112996430A (en) | 2018-08-31 | 2021-06-18 | Facebook Technologies, LLC | Camera-guided interpretation of neuromuscular signals |
CN112771478A (en) | 2018-09-26 | 2021-05-07 | Facebook Technologies, LLC | Neuromuscular control of physical objects in an environment |
CN111091592B (en) * | 2018-10-24 | 2023-08-15 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Image processing method, image processing apparatus, electronic device, and readable storage medium |
EP3886693A4 (en) | 2018-11-27 | 2022-06-08 | Facebook Technologies, LLC. | Methods and apparatus for autocalibration of a wearable electrode sensor system |
US10659190B1 (en) * | 2019-02-25 | 2020-05-19 | At&T Intellectual Property I, L.P. | Optimizing delay-sensitive network-based communications with latency guidance |
US10905383B2 (en) | 2019-02-28 | 2021-02-02 | Facebook Technologies, Llc | Methods and apparatus for unsupervised one-shot machine learning for classification of human gestures and estimation of applied forces |
US11587361B2 (en) * | 2019-11-08 | 2023-02-21 | Wisconsin Alumni Research Foundation | Movement monitoring system |
US11036989B1 (en) * | 2019-12-11 | 2021-06-15 | Snap Inc. | Skeletal tracking using previous frames |
DE102019134253A1 (en) * | 2019-12-13 | 2021-06-17 | Hoya Corporation | Apparatus, method and computer readable storage medium for detecting objects in a video signal based on visual cues using an output of a machine learning model |
US11763527B2 (en) * | 2020-12-31 | 2023-09-19 | Oberon Technologies, Inc. | Systems and methods for providing virtual reality environment-based training and certification |
US11868531B1 (en) | 2021-04-08 | 2024-01-09 | Meta Platforms Technologies, Llc | Wearable device providing for thumb-to-finger-based input gestures detected based on neuromuscular signals, and systems and methods of use thereof |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1722071A (en) * | 2004-07-15 | 2006-01-18 | Microsoft Corporation | Methods and apparatuses for compound tracking systems |
CN1740950A (en) * | 2005-07-21 | 2006-03-01 | Gao Chunping | Module type non-hand operated control method and apparatus thereof |
US20080071481A1 (en) * | 2006-08-14 | 2008-03-20 | Algreatly Cherif A | Motion tracking apparatus and technique |
CN101305401A (en) * | 2005-11-14 | 2008-11-12 | Microsoft Corporation | Stereo video for gaming |
CN101547344A (en) * | 2009-04-24 | 2009-09-30 | Graduate School at Shenzhen, Tsinghua University | Video monitoring device and tracking and recording method based on linkage camera |
Family Cites Families (170)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4695953A (en) | 1983-08-25 | 1987-09-22 | Blair Preston E | TV animation interactively controlled by the viewer |
US4630910A (en) | 1984-02-16 | 1986-12-23 | Robotic Vision Systems, Inc. | Method of measuring in three-dimensions at high speed |
US4627620A (en) | 1984-12-26 | 1986-12-09 | Yang John P | Electronic athlete trainer for improving skills in reflex, speed and accuracy |
US4645458A (en) | 1985-04-15 | 1987-02-24 | Harald Phillip | Athletic evaluation and training apparatus |
US4702475A (en) | 1985-08-16 | 1987-10-27 | Innovating Training Products, Inc. | Sports technique and reaction training system |
US4843568A (en) | 1986-04-11 | 1989-06-27 | Krueger Myron W | Real time perception of and response to the actions of an unencumbered participant/user |
US4711543A (en) | 1986-04-14 | 1987-12-08 | Blair Preston E | TV animation interactively controlled by the viewer |
US4796997A (en) | 1986-05-27 | 1989-01-10 | Synthetic Vision Systems, Inc. | Method and system for high-speed, 3-D imaging of an object at a vision station |
US5184295A (en) | 1986-05-30 | 1993-02-02 | Mann Ralph V | System and method for teaching physical skills |
US4751642A (en) | 1986-08-29 | 1988-06-14 | Silva John M | Interactive sports simulation system with physiological sensing and psychological conditioning |
US4809065A (en) | 1986-12-01 | 1989-02-28 | Kabushiki Kaisha Toshiba | Interactive system and related method for displaying data to produce a three-dimensional image of an object |
US4817950A (en) | 1987-05-08 | 1989-04-04 | Goo Paul E | Video game control unit and attitude sensor |
US5239464A (en) | 1988-08-04 | 1993-08-24 | Blair Preston E | Interactive video system providing repeated switching of multiple tracks of actions sequences |
US5239463A (en) | 1988-08-04 | 1993-08-24 | Blair Preston E | Method and apparatus for player interaction with animated characters and objects |
US4901362A (en) | 1988-08-08 | 1990-02-13 | Raytheon Company | Method of recognizing patterns |
US4893183A (en) | 1988-08-11 | 1990-01-09 | Carnegie-Mellon University | Robotic vision system |
JPH02199526A (en) | 1988-10-14 | 1990-08-07 | David G Capper | Control interface apparatus |
US4925189A (en) | 1989-01-13 | 1990-05-15 | Braeunig Thomas F | Body-mounted video game exercise device |
US5229756A (en) | 1989-02-07 | 1993-07-20 | Yamaha Corporation | Image control apparatus |
US5469740A (en) | 1989-07-14 | 1995-11-28 | Impulse Technology, Inc. | Interactive video testing and training system |
JPH03103822U (en) | 1990-02-13 | 1991-10-29 | ||
US5101444A (en) | 1990-05-18 | 1992-03-31 | Panacea, Inc. | Method and apparatus for high speed object location |
US5148154A (en) | 1990-12-04 | 1992-09-15 | Sony Corporation Of America | Multi-dimensional user interface |
US5534917A (en) | 1991-05-09 | 1996-07-09 | Very Vivid, Inc. | Video image based control system |
US5417210A (en) | 1992-05-27 | 1995-05-23 | International Business Machines Corporation | System and method for augmentation of endoscopic surgery |
US5295491A (en) | 1991-09-26 | 1994-03-22 | Sam Technology, Inc. | Non-invasive human neurocognitive performance capability testing method and system |
US6054991A (en) | 1991-12-02 | 2000-04-25 | Texas Instruments Incorporated | Method of modeling player position and movement in a virtual reality system |
CA2101633A1 (en) | 1991-12-03 | 1993-06-04 | Barry J. French | Interactive video testing and training system |
US5875108A (en) | 1991-12-23 | 1999-02-23 | Hoffberg; Steven M. | Ergonomic man-machine interface incorporating adaptive pattern recognition based control system |
JPH07325934A (en) | 1992-07-10 | 1995-12-12 | The Walt Disney Company | Method and apparatus for providing enhanced graphics in a virtual world |
US5999908A (en) | 1992-08-06 | 1999-12-07 | Abelow; Daniel H. | Customer-based product design module |
US5320538A (en) | 1992-09-23 | 1994-06-14 | Hughes Training, Inc. | Interactive aircraft training system and method |
IT1257294B (en) | 1992-11-20 | 1996-01-12 | | Device for detecting the configuration of a physiological distal unit, to be used in particular as an advanced interface for machines and computers |
US5495576A (en) | 1993-01-11 | 1996-02-27 | Ritchey; Kurtis J. | Panoramic image based virtual reality/telepresence audio-visual system and method |
US5690582A (en) | 1993-02-02 | 1997-11-25 | Tectrix Fitness Equipment, Inc. | Interactive exercise apparatus |
JP2799126B2 (en) | 1993-03-26 | 1998-09-17 | Namco Ltd. | Video game device and game input device |
US5405152A (en) | 1993-06-08 | 1995-04-11 | The Walt Disney Company | Method and apparatus for an interactive video game with physical feedback |
US5454043A (en) | 1993-07-30 | 1995-09-26 | Mitsubishi Electric Research Laboratories, Inc. | Dynamic and static hand gesture recognition through low-level image analysis |
US5423554A (en) | 1993-09-24 | 1995-06-13 | Metamedia Ventures, Inc. | Virtual reality game method and apparatus |
US5980256A (en) | 1993-10-29 | 1999-11-09 | Carmein; David E. E. | Virtual reality system with enhanced sensory apparatus |
JP3419050B2 (en) | 1993-11-19 | 2003-06-23 | Hitachi, Ltd. | Input device |
US5347306A (en) | 1993-12-17 | 1994-09-13 | Mitsubishi Electric Research Laboratories, Inc. | Animated electronic meeting place |
JP2552427B2 (en) | 1993-12-28 | 1996-11-13 | Konami Co., Ltd. | TV play system |
US5577981A (en) | 1994-01-19 | 1996-11-26 | Jarvik; Robert | Virtual reality exercise machine and computer controlled video system |
US5580249A (en) | 1994-02-14 | 1996-12-03 | Sarcos Group | Apparatus for simulating mobility of a human |
US5597309A (en) | 1994-03-28 | 1997-01-28 | Riess; Thomas | Method and apparatus for treatment of gait problems associated with parkinson's disease |
US5385519A (en) | 1994-04-19 | 1995-01-31 | Hsu; Chi-Hsueh | Running machine |
US5524637A (en) | 1994-06-29 | 1996-06-11 | Erickson; Jon W. | Interactive system for measuring physiological exertion |
US5563988A (en) | 1994-08-01 | 1996-10-08 | Massachusetts Institute Of Technology | Method and system for facilitating wireless, full-body, real-time user interaction with a digitally represented visual environment |
US6714665B1 (en) | 1994-09-02 | 2004-03-30 | Sarnoff Corporation | Fully automated iris recognition system utilizing wide and narrow fields of view |
US5516105A (en) | 1994-10-06 | 1996-05-14 | Exergame, Inc. | Acceleration activated joystick |
US5638300A (en) | 1994-12-05 | 1997-06-10 | Johnson; Lee E. | Golf swing analysis system |
JPH08161292A (en) | 1994-12-09 | 1996-06-21 | Matsushita Electric Ind Co Ltd | Method and system for detecting congestion degree |
US5594469A (en) | 1995-02-21 | 1997-01-14 | Mitsubishi Electric Information Technology Center America Inc. | Hand gesture machine control system |
US5682229A (en) | 1995-04-14 | 1997-10-28 | Schwartz Electro-Optics, Inc. | Laser range camera |
US5913727A (en) | 1995-06-02 | 1999-06-22 | Ahdoot; Ned | Interactive movement and contact simulation game |
US6229913B1 (en) | 1995-06-07 | 2001-05-08 | The Trustees Of Columbia University In The City Of New York | Apparatus and methods for determining the three-dimensional shape of an object using active illumination and relative blurring in two-images due to defocus |
US5682196A (en) | 1995-06-22 | 1997-10-28 | Actv, Inc. | Three-dimensional (3D) video presentation system providing interactive 3D presentation with personalized audio responses for multiple viewers |
US5702323A (en) | 1995-07-26 | 1997-12-30 | Poulton; Craig K. | Electronic exercise enhancer |
US6308565B1 (en) | 1995-11-06 | 2001-10-30 | Impulse Technology Ltd. | System and method for tracking and assessing movement skills in multidimensional space |
US6430997B1 (en) | 1995-11-06 | 2002-08-13 | Trazer Technologies, Inc. | System and method for tracking and assessing movement skills in multidimensional space |
US6073489A (en) | 1995-11-06 | 2000-06-13 | French; Barry J. | Testing and training system for assessing the ability of a player to complete a task |
US6098458A (en) | 1995-11-06 | 2000-08-08 | Impulse Technology, Ltd. | Testing and training system for assessing movement and agility skills without a confining field |
US6176782B1 (en) | 1997-12-22 | 2001-01-23 | Philips Electronics North America Corp. | Motion-based command generation technology |
US5933125A (en) | 1995-11-27 | 1999-08-03 | Cae Electronics, Ltd. | Method and apparatus for reducing instability in the display of a virtual environment |
US5802220A (en) | 1995-12-15 | 1998-09-01 | Xerox Corporation | Apparatus and method for tracking facial motion through a sequence of images |
US5641288A (en) | 1996-01-11 | 1997-06-24 | Zaenglein, Jr.; William G. | Shooting simulating process and training device using a virtual reality display screen |
US6152856A (en) | 1996-05-08 | 2000-11-28 | Real Vision Corporation | Real time simulation using position sensing |
US6173066B1 (en) | 1996-05-21 | 2001-01-09 | Cybernet Systems Corporation | Pose determination and tracking by matching 3D objects to a 2D sensor |
US5989157A (en) | 1996-08-06 | 1999-11-23 | Walton; Charles A. | Exercising system with electronic inertial game playing |
CN1168057C (en) | 1996-08-14 | 2004-09-22 | Nurakhmed Nurislamovich Latypov | Method for following and imaging a subject's three-dimensional position and orientation, method for presenting a virtual space to a subject, and systems for implementing said methods |
JP3064928B2 (en) | 1996-09-20 | 2000-07-12 | NEC Corporation | Subject extraction method |
DE69626208T2 (en) | 1996-12-20 | 2003-11-13 | Hitachi Europ Ltd | Method and system for recognizing hand gestures |
US6009210A (en) | 1997-03-05 | 1999-12-28 | Digital Equipment Corporation | Hands-free interface to a virtual reality environment using head tracking |
US6100896A (en) | 1997-03-24 | 2000-08-08 | Mitsubishi Electric Information Technology Center America, Inc. | System for designing graphical multi-participant environments |
US5877803A (en) | 1997-04-07 | 1999-03-02 | Tritech Microelectronics International, Ltd. | 3-D image detector |
US6215898B1 (en) | 1997-04-15 | 2001-04-10 | Interval Research Corporation | Data processing system and method |
JP3077745B2 (en) | 1997-07-31 | 2000-08-14 | NEC Corporation | Data processing method and apparatus, information storage medium |
US6188777B1 (en) | 1997-08-01 | 2001-02-13 | Interval Research Corporation | Method and apparatus for personnel detection and tracking |
US6289112B1 (en) | 1997-08-22 | 2001-09-11 | International Business Machines Corporation | System and method for determining block direction in fingerprint images |
US6720949B1 (en) | 1997-08-22 | 2004-04-13 | Timothy R. Pryor | Man machine interfaces and applications |
AUPO894497A0 (en) | 1997-09-02 | 1997-09-25 | Xenotech Research Pty Ltd | Image processing method and apparatus |
EP0905644A3 (en) | 1997-09-26 | 2004-02-25 | Matsushita Electric Industrial Co., Ltd. | Hand gesture recognizing device |
US6141463A (en) | 1997-10-10 | 2000-10-31 | Electric Planet Interactive | Method and system for estimating jointed-figure configurations |
AU9808298A (en) | 1997-10-15 | 1999-05-03 | Electric Planet, Inc. | A system and method for generating an animatable character |
US6072494A (en) | 1997-10-15 | 2000-06-06 | Electric Planet, Inc. | Method and apparatus for real-time gesture recognition |
WO1999019828A1 (en) | 1997-10-15 | 1999-04-22 | Electric Planet, Inc. | Method and apparatus for performing a clean background subtraction |
US6130677A (en) | 1997-10-15 | 2000-10-10 | Electric Planet, Inc. | Interactive computer vision system |
US6101289A (en) | 1997-10-15 | 2000-08-08 | Electric Planet, Inc. | Method and apparatus for unencumbered capture of an object |
US6181343B1 (en) | 1997-12-23 | 2001-01-30 | Philips Electronics North America Corp. | System and method for permitting three-dimensional navigation through a virtual reality environment using camera-based gesture inputs |
US6159100A (en) | 1998-04-23 | 2000-12-12 | Smith; Michael D. | Virtual reality game |
US6077201A (en) | 1998-06-12 | 2000-06-20 | Cheng; Chau-Yang | Exercise bicycle |
US20010008561A1 (en) | 1999-08-10 | 2001-07-19 | Paul George V. | Real-time object tracking system |
US7121946B2 (en) | 1998-08-10 | 2006-10-17 | Cybernet Systems Corporation | Real-time head tracking system for computer games and other applications |
US6950534B2 (en) | 1998-08-10 | 2005-09-27 | Cybernet Systems Corporation | Gesture-controlled interfaces for self-service machines and other applications |
US6801637B2 (en) | 1999-08-10 | 2004-10-05 | Cybernet Systems Corporation | Optical body tracker |
US6681031B2 (en) | 1998-08-10 | 2004-01-20 | Cybernet Systems Corporation | Gesture-controlled interfaces for self-service machines and other applications |
US7036094B1 (en) * | 1998-08-10 | 2006-04-25 | Cybernet Systems Corporation | Behavior recognition system |
IL126284A (en) | 1998-09-17 | 2002-12-01 | Netmor Ltd | System and method for three dimensional positioning and tracking |
EP0991011B1 (en) | 1998-09-28 | 2007-07-25 | Matsushita Electric Industrial Co., Ltd. | Method and device for segmenting hand gestures |
AU1930700A (en) | 1998-12-04 | 2000-06-26 | Interval Research Corporation | Background estimation and segmentation based on range and color |
US6147678A (en) | 1998-12-09 | 2000-11-14 | Lucent Technologies Inc. | Video hand image-three-dimensional computer interface with multiple degrees of freedom |
AU1574899A (en) | 1998-12-16 | 2000-07-03 | 3Dv Systems Ltd. | Self gating photosurface |
US6570555B1 (en) | 1998-12-30 | 2003-05-27 | Fuji Xerox Co., Ltd. | Method and apparatus for embodied conversational characters with multimodal input/output in an interface device |
US6363160B1 (en) | 1999-01-22 | 2002-03-26 | Intel Corporation | Interface using pattern recognition and tracking |
US7003134B1 (en) | 1999-03-08 | 2006-02-21 | Vulcan Patents Llc | Three dimensional object pose estimation which employs dense depth information |
US6299308B1 (en) | 1999-04-02 | 2001-10-09 | Cybernet Systems Corporation | Low-cost non-imaging eye tracker system for computer control |
US6503195B1 (en) | 1999-05-24 | 2003-01-07 | University Of North Carolina At Chapel Hill | Methods and systems for real-time structured light depth extraction and endoscope using real-time structured light depth extraction |
US6476834B1 (en) | 1999-05-28 | 2002-11-05 | International Business Machines Corporation | Dynamic creation of selectable items on surfaces |
JP4332649B2 (en) * | 1999-06-08 | 2009-09-16 | National Institute of Information and Communications Technology | Hand shape and posture recognition device, hand shape and posture recognition method, and recording medium storing a program for executing the method |
US6873723B1 (en) | 1999-06-30 | 2005-03-29 | Intel Corporation | Segmenting three-dimensional video images using stereo |
US6738066B1 (en) | 1999-07-30 | 2004-05-18 | Electric Planet, Inc. | System, method and article of manufacture for detecting collisions between video images generated by a camera and an object depicted on a display |
US7113918B1 (en) | 1999-08-01 | 2006-09-26 | Electric Planet, Inc. | Method for video enabled electronic commerce |
US7050606B2 (en) | 1999-08-10 | 2006-05-23 | Cybernet Systems Corporation | Tracking and gesture recognition system particularly suited to vehicular control applications |
US6674877B1 (en) | 2000-02-03 | 2004-01-06 | Microsoft Corporation | System and method for visually tracking occluded objects in real time |
US6663491B2 (en) | 2000-02-18 | 2003-12-16 | Namco Ltd. | Game apparatus, storage medium and computer program that adjust tempo of sound |
US6633294B1 (en) | 2000-03-09 | 2003-10-14 | Seth Rosenthal | Method and apparatus for using captured high density motion for animation |
SE0000850D0 (en) | 2000-03-13 | 2000-03-13 | Pink Solution Ab | Recognition arrangement |
EP1152261A1 (en) | 2000-04-28 | 2001-11-07 | CSEM Centre Suisse d'Electronique et de Microtechnique SA | Device and method for spatially resolved photodetection and demodulation of modulated electromagnetic waves |
US6640202B1 (en) | 2000-05-25 | 2003-10-28 | International Business Machines Corporation | Elastic sensor mesh system for 3-dimensional measurement, mapping and kinematics applications |
US6731799B1 (en) | 2000-06-01 | 2004-05-04 | University Of Washington | Object segmentation with background extraction and moving boundary techniques |
US6788809B1 (en) | 2000-06-30 | 2004-09-07 | Intel Corporation | System and method for gesture recognition in three dimensions using stereo imaging and color vision |
US7227526B2 (en) * | 2000-07-24 | 2007-06-05 | Gesturetek, Inc. | Video-based image control system |
US7058204B2 (en) * | 2000-10-03 | 2006-06-06 | Gesturetek, Inc. | Multiple camera control system |
US7039676B1 (en) | 2000-10-31 | 2006-05-02 | International Business Machines Corporation | Using video image analysis to automatically transmit gestures over a network in a chat or instant messaging session |
US6539931B2 (en) | 2001-04-16 | 2003-04-01 | Koninklijke Philips Electronics N.V. | Ball throwing assistant |
US7259747B2 (en) | 2001-06-05 | 2007-08-21 | Reactrix Systems, Inc. | Interactive video display system |
US8035612B2 (en) | 2002-05-28 | 2011-10-11 | Intellectual Ventures Holding 67 Llc | Self-contained interactive video display system |
JP3420221B2 (en) | 2001-06-29 | 2003-06-23 | Konami Computer Entertainment Tokyo, Inc. | Game device and program |
US7274800B2 (en) | 2001-07-18 | 2007-09-25 | Intel Corporation | Dynamic gesture recognition from stereo sequences |
US6937742B2 (en) | 2001-09-28 | 2005-08-30 | Bellsouth Intellectual Property Corporation | Gesture activated home appliance |
ATE321689T1 (en) | 2002-04-19 | 2006-04-15 | Iee Sarl | SAFETY DEVICE FOR A VEHICLE |
US7348963B2 (en) | 2002-05-28 | 2008-03-25 | Reactrix Systems, Inc. | Interactive video display system |
US7710391B2 (en) | 2002-05-28 | 2010-05-04 | Matthew Bell | Processing an image utilizing a spatially varying pattern |
US7170492B2 (en) | 2002-05-28 | 2007-01-30 | Reactrix Systems, Inc. | Interactive video display system |
US7489812B2 (en) | 2002-06-07 | 2009-02-10 | Dynamic Digital Depth Research Pty Ltd. | Conversion and encoding techniques |
US7576727B2 (en) | 2002-12-13 | 2009-08-18 | Matthew Bell | Interactive directed light/sound system |
JP4235729B2 (en) | 2003-02-03 | 2009-03-11 | Shizuoka University | Distance image sensor |
US7665041B2 (en) * | 2003-03-25 | 2010-02-16 | Microsoft Corporation | Architecture for controlling a computer using hand gestures |
DE602004006190T8 (en) | 2003-03-31 | 2008-04-10 | Honda Motor Co., Ltd. | Device, method and program for gesture recognition |
US8072470B2 (en) | 2003-05-29 | 2011-12-06 | Sony Computer Entertainment Inc. | System and method for providing a real-time three-dimensional interactive environment |
WO2004107266A1 (en) * | 2003-05-29 | 2004-12-09 | Honda Motor Co., Ltd. | Visual tracking using depth data |
JP4546956B2 (en) | 2003-06-12 | 2010-09-22 | Honda Motor Co., Ltd. | Target orientation estimation using depth detection |
US7536032B2 (en) | 2003-10-24 | 2009-05-19 | Reactrix Systems, Inc. | Method and system for processing captured image information in an interactive video display system |
CN100573548C (en) | 2004-04-15 | 2009-12-23 | GestureTek, Inc. | Method and apparatus for tracking bimanual movements |
US7308112B2 (en) | 2004-05-14 | 2007-12-11 | Honda Motor Co., Ltd. | Sign based human-machine interaction |
US7704135B2 (en) | 2004-08-23 | 2010-04-27 | Harrison Jr Shelton E | Integrated game system, method, and device |
KR20060070280A (en) | 2004-12-20 | 2006-06-23 | 한국전자통신연구원 | Apparatus and its method of user interface using hand gesture recognition |
HUE049974T2 (en) | 2005-01-07 | 2020-11-30 | Qualcomm Inc | Detecting and tracking objects in images |
WO2006074310A2 (en) | 2005-01-07 | 2006-07-13 | Gesturetek, Inc. | Creating 3d images of objects by illuminating with infrared patterns |
CN101137996A (en) | 2005-01-07 | 2008-03-05 | 格斯图尔泰克股份有限公司 | Optical flow based tilt sensor |
EP1851750A4 (en) | 2005-02-08 | 2010-08-25 | Oblong Ind Inc | System and method for gesture based control system |
JP4686595B2 (en) | 2005-03-17 | 2011-05-25 | Honda Motor Co., Ltd. | Pose estimation based on critical point analysis |
EP1886509B1 (en) | 2005-05-17 | 2017-01-18 | Qualcomm Incorporated | Orientation-sensitive signal output |
EP1752748B1 (en) | 2005-08-12 | 2008-10-29 | MESA Imaging AG | Highly sensitive, fast pixel for use in an image sensor |
US20080026838A1 (en) | 2005-08-22 | 2008-01-31 | Dunstan James E | Multi-player non-role-playing virtual world games: method for two-way interaction between participants and multi-player virtual world games |
US20070047834A1 (en) | 2005-08-31 | 2007-03-01 | International Business Machines Corporation | Method and apparatus for visual background subtraction with one or more preprocessing modules |
US7450736B2 (en) | 2005-10-28 | 2008-11-11 | Honda Motor Co., Ltd. | Monocular tracking of 3D human motion with a coordinated mixture of factor analyzers |
US7634108B2 (en) | 2006-02-14 | 2009-12-15 | Microsoft Corp. | Automated face enhancement |
US7701439B2 (en) | 2006-07-13 | 2010-04-20 | Northrop Grumman Corporation | Gesture recognition simulation system and method |
JP5395323B2 (en) | 2006-09-29 | 2014-01-22 | Brainvision Inc. | Solid-state image sensor |
US7412077B2 (en) | 2006-12-29 | 2008-08-12 | Motorola, Inc. | Apparatus and methods for head pose estimation and head gesture detection |
US7729530B2 (en) | 2007-03-03 | 2010-06-01 | Sergey Antonov | Method and apparatus for 3-D data input to a personal computer with a multimedia oriented operating system |
US7852262B2 (en) | 2007-08-16 | 2010-12-14 | Cybernet Systems Corporation | Wireless mobile indoor/outdoor tracking system |
CN101254344B (en) | 2008-04-18 | 2010-06-16 | Li Gang | Game device of field orientation corresponding with display screen dot array in proportion and method |
US8325978B2 (en) * | 2008-10-30 | 2012-12-04 | Nokia Corporation | Method, apparatus and computer program product for providing adaptive gesture analysis |
US8379987B2 (en) * | 2008-12-30 | 2013-02-19 | Nokia Corporation | Method, apparatus and computer program product for providing hand segmentation for gesture analysis |
US8787663B2 (en) * | 2010-03-01 | 2014-07-22 | Primesense Ltd. | Tracking body parts by combined color image and depth processing |
US8351651B2 (en) | 2010-04-26 | 2013-01-08 | Microsoft Corporation | Hand-location post-process refinement in a tracking system |
US8373654B2 (en) * | 2010-04-29 | 2013-02-12 | Acer Incorporated | Image based motion gesture recognition method and system thereof |
- 2010-04-26 US US12/767,126 patent/US8351651B2/en active Active
- 2011-04-25 CN CN201110112867.2A patent/CN102184009B/en active Active
- 2012-12-18 US US13/718,494 patent/US8452051B1/en active Active
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103537099A (en) * | 2012-07-09 | 2014-01-29 | Shenzhen Taishan Online Technology Co., Ltd. | Tracking toy |
CN103537099B (en) * | 2012-07-09 | 2016-02-10 | Shenzhen Taishan Online Technology Co., Ltd. | Tracking toy |
CN104380729A (en) * | 2012-07-31 | 2015-02-25 | Intel Corporation | Context-driven adjustment of camera parameters |
CN103970264B (en) * | 2013-01-29 | 2016-08-31 | Wistron Corporation | Gesture recognition and control method and device |
CN103970264A (en) * | 2013-01-29 | 2014-08-06 | Wistron Corporation | Gesture recognition and control method and device |
CN105580051B (en) * | 2013-10-30 | 2019-05-14 | Intel Corporation | Image capture feedback |
CN105580051A (en) * | 2013-10-30 | 2016-05-11 | Intel Corporation | Image capture feedback |
CN104978554A (en) * | 2014-04-08 | 2015-10-14 | Lenovo (Beijing) Co., Ltd. | Information processing method and electronic equipment |
CN104978554B (en) * | 2014-04-08 | 2019-02-05 | Lenovo (Beijing) Co., Ltd. | Information processing method and electronic equipment |
CN104463906B (en) * | 2014-11-11 | 2018-09-28 | Guangdong Zhongxing Electronics Co., Ltd. | Object tracking device and tracking method thereof |
CN104463906A (en) * | 2014-11-11 | 2015-03-25 | Guangdong Zhongxing Electronics Co., Ltd. | Object tracking device and method |
CN108353127A (en) * | 2015-11-06 | 2018-07-31 | Google LLC | Image stabilization based on depth camera |
CN108353127B (en) * | 2015-11-06 | 2020-08-25 | Google LLC | Image stabilization based on depth camera |
CN105892657A (en) * | 2016-03-30 | 2016-08-24 | Lenovo (Beijing) Co., Ltd. | Information processing method and electronic equipment |
CN107729797A (en) * | 2016-08-10 | 2018-02-23 | Tata Consultancy Services Ltd. | System and method for identifying body joint positions based on sensor data analysis |
CN107729797B (en) * | 2016-08-10 | 2021-04-09 | Tata Consultancy Services Ltd. | System and method for identifying body joint positions based on sensor data analysis |
Also Published As
Publication number | Publication date |
---|---|
CN102184009B (en) | 2014-03-19 |
US20110262002A1 (en) | 2011-10-27 |
US20130120244A1 (en) | 2013-05-16 |
US8452051B1 (en) | 2013-05-28 |
US8351651B2 (en) | 2013-01-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102184009B (en) | Hand position post processing refinement in tracking system | |
CN102289815B (en) | Detecting motion for a multifunction sensor device | |
CN102549619B (en) | Human tracking system | |
CN102129293B (en) | Tracking groups of users in motion capture system | |
CN103608844B (en) | Fully automatic dynamic articulated model calibration | |
CN102665838B (en) | Methods and systems for determining and tracking extremities of a target | |
CN102194105B (en) | Proxy training data for human body tracking | |
CN102129551B (en) | Gesture detection based on joint skipping | |
CN102193624B (en) | Physical interaction zone for gesture-based user interfaces | |
CN102576466B (en) | System and method for tracking a model | |
CN102413885B (en) | Systems and methods for applying model tracking to motion capture | |
CN102596340B (en) | Systems and methods for applying animations or motions to a character | |
CN102332090B (en) | Compartmentalizing focus area within field of view | |
CN102609954B (en) | Validation analysis of human target tracking | |
CN102414641B (en) | Altering view perspective within display environment | |
CN102622774B (en) | Living room movie creation | |
CN102448562B (en) | Systems and methods for tracking a model | |
CN102129292B (en) | Recognizing user intent in motion capture system | |
CN102163324B (en) | Depth image de-aliasing technique | |
CN102222347B (en) | Creating a range image through wavefront coding | |
CN102207771A (en) | Intention deduction of users participating in motion capture system | |
US20100302253A1 (en) | Real time retargeting of skeletal data to game avatar | |
US20100302247A1 (en) | Target digitization, extraction, and tracking | |
CN102222431A (en) | Machine-based sign language translator | |
CN105229666A (en) | Motion analysis in 3D rendering |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
ASS | Succession or assignment of patent right |
Owner name: MICROSOFT TECHNOLOGY LICENSING LLC
Free format text: FORMER OWNER: MICROSOFT CORP.
Effective date: 20150429
C41 | Transfer of patent application or patent right or utility model | ||
TR01 | Transfer of patent right |
Effective date of registration: 20150429
Address after: Washington State
Patentee after: Microsoft Technology Licensing, LLC
Address before: Washington State
Patentee before: Microsoft Corp.