CN102831380A - Body action identification method and system based on depth image induction - Google Patents
- Publication number
- CN102831380A · CN201110160313XA · CN201110160313A
- Authority
- CN
- China
- Prior art keywords
- user
- depth image
- limbs
- image information
- limb action
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to a body action identification method and a body action identification system based on depth image induction. The body action identification method comprises the following steps: acquiring the depth image information of a user and an environment where the user stands; extracting the body outline of the user from the background of the depth image information; respectively changing the size of each part in the skeletal framework of the human body to be adapted to the body outline of the user, and acquiring the adapted body skeletal framework of the user; tracking and extracting the data which present the movement of the body of the user in a manner adapted to the body skeletal framework; and identifying the body action of the user according to the data which present the movement of the body of the user. According to the invention, the body action of the user is further identified and tracked by establishing the skeletal system of the user, so that the problem existing in the current action induction identification solution is better solved, the body action identification efficiency is improved, and the user experience of human-computer interaction is improved.
Description
Technical field
The present invention relates to human-computer interaction technology, and more specifically to a limb action recognition method and system based on depth image sensing.
Background technology
Because traditional human-computer interaction devices such as the mouse and keyboard have limitations in the naturalness and friendliness of the user experience, human-computer interaction has become a very popular research field in recent years, and an increasing number of novel interaction modes have appeared, such as touch control, voice control, gesture control, and motion sensing. In particular, the motion-sensing interaction represented by Nintendo's Wii and Sony's MOVE, which uses various sensor devices to recognize limb actions (more specifically, upper-limb actions) in real time and convert them into commands that a host device such as a game console can recognize, is currently a popular mode of human-computer interaction.
Taking the Wii motion-sensing solution as an example, its core is a MEMS (Micro-Electromechanical System) three-axis acceleration sensing chip built into the dedicated hand-held game controller. When the user makes an action while holding the controller, the three-axis accelerometer converts the user's gesture motion into a digital signal that the system can recognize. The MOVE motion-sensing solution is instead based on the principle of image recognition; its core is a camera on the display device that identifies and tracks the trajectory of a glowing colored ball on the dedicated hand-held controller, and the glow color of the ball is automatically adjusted according to the illumination of the actual environment to ensure efficient recognition by the system.
Whether the MEMS sensor scheme or the image recognition scheme is adopted, the user still needs to hold an auxiliary device, which imposes certain restrictions on the user experience. For example, when the user's action is too vigorous, the auxiliary device is easily thrown; since the MEMS sensor controller itself is relatively expensive, the economic loss if it is dropped and damaged is considerable. In the MOVE scheme, if the user is in a strongly lit environment, the recognition rate of the user's actions drops sharply, or the actions cannot be recognized at all, seriously affecting the user experience.
At present, solutions represented by gesture recognition that let the user interact without any external accessory are basically based on two-dimensional image processing and pattern recognition technology, and they impose harsh requirements on the illumination of the usage environment. From the viewpoint of limb action recognition, traditional action recognition requires several complex steps and processes such as motion modeling, action segmentation, and motion analysis. In particular, for dynamic limb actions, different users exhibit differences in speed and trajectory when performing the same action, so the modeled motion trajectory fluctuates nonlinearly along the time axis; eliminating this nonlinear fluctuation is very difficult and complicated. As a result, the accuracy and efficiency of traditional limb action recognition based on two-dimensional images are generally not high enough.
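The nonlinear time-axis fluctuation described above is commonly handled with dynamic time warping (DTW), which aligns two trajectories performed at different speeds. The following is a minimal illustrative sketch only; DTW is standard background technique, not part of the claimed invention:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D motion trajectories."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = DTW distance between the prefixes a[:i] and b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # stretch a
                                 cost[i][j - 1],      # stretch b
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# The same gesture performed slowly and quickly still matches:
slow = [0, 0, 1, 1, 2, 2, 3, 3]
fast = [0, 1, 2, 3]
print(dtw_distance(slow, fast))  # 0.0 (identical shape despite speed difference)
```

This is why speed differences between users are tractable in principle, though, as the passage notes, doing so robustly on 2-D image trajectories remains difficult.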
On the other hand, real user limb actions are made in a three-dimensional environment, while two-dimensional image processing maps the user's three-dimensional motion onto a two-dimensional motion. It is therefore difficult to obtain true three-dimensional motion information, which greatly limits the richness of the limb actions that can be recognized and has limited the wide application of gesture recognition devices.
Summary of the invention
The main purpose of the present invention is to overcome the shortcomings of the prior art by disclosing a limb action recognition method and system based on depth image sensing, which recognizes and tracks the user's limb actions and improves limb action recognition efficiency.
The invention discloses a limb action recognition system based on depth image sensing, comprising:
Depth image information acquisition unit: used to obtain depth image information of the user and the environment in which the user is located;
Limb contour extraction unit: used to extract the user's limb contour from the background of the above depth image information;
Standard skeleton adaptation unit: used to change the size of each part of a standard skeleton framework so that it fits the above user's limb contour, obtaining an adapted limb skeleton framework corresponding to this user;
Skeleton motion tracking unit: used to track, in the depth image information and in the form of the adapted limb skeleton framework, and extract data expressing the motion of the user's limbs;
Limb action recognition unit: used to recognize the user's limb action according to the data expressing the motion of the user's limbs.
In the limb action recognition system disclosed by the invention, said depth image information acquisition unit further comprises:
Depth image sensing unit: used to emit an encoded infrared structured light plane toward the direction of the user, and to receive and sense the infrared structured light reflected back by the user and the objects in the surrounding three-dimensional environment; and
Depth image processing unit: used to obtain the depth image information by comparing the coding of the above reflected infrared structured light with the coding of the original structured light plane.
Said limb contour extraction unit extracts the user's limb contour according to a motion difference analysis over consecutive frames of depth image information.
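As a sketch of this motion difference analysis, consecutive depth frames can be differenced to mask the moving user against the static background. The threshold value and array layout below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def extract_moving_contour_mask(prev_depth, cur_depth, threshold=30):
    """Return a boolean mask of pixels whose depth changed between two
    consecutive depth frames (the moving user vs. the static background)."""
    diff = np.abs(cur_depth.astype(np.int32) - prev_depth.astype(np.int32))
    return diff > threshold  # True where motion occurred

# Static background with a small "user" region that moved 100 mm closer:
prev = np.full((4, 4), 2000, dtype=np.uint16)   # depth values in mm
cur = prev.copy()
cur[1:3, 1:3] -= 100                            # the user's limbs moved
mask = extract_moving_contour_mask(prev, cur)
print(mask.sum())  # 4 moving pixels
```

The contour of the `True` region in the mask would then serve as the user's limb contour for the subsequent skeleton adaptation.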
Said standard skeleton adaptation unit adapts said standard skeleton framework by scaling, rotation, and deformation calculation methods.
Said standard skeleton framework comprises a head, torso, pelvis, left upper arm, left lower arm, left hand, right upper arm, right lower arm, right hand, left thigh, left lower leg, left foot, right thigh, right lower leg, and right foot, interconnected according to normal human body structure.
The invention also discloses a limb action recognition method based on depth image sensing, comprising the steps of:
A. obtaining depth image information of the user and the environment in which the user is located;
B. extracting the user's limb contour from the background of the above depth image information;
C. changing the size of each part of a standard skeleton framework so that it fits the above user's limb contour, obtaining an adapted limb skeleton framework corresponding to the user;
D. tracking, in the depth image information and in the form of the adapted limb skeleton framework, and extracting data expressing the motion of the user's limbs;
E. recognizing the user's limb action according to the data expressing the motion of the user's limbs.
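Steps A through E can be sketched as one toy pipeline. The threshold, the single tracked key point, and the two action labels below are illustrative assumptions rather than the patent's actual implementation:

```python
def recognize(depth_frames, background, skeleton_height=10):
    """Toy end-to-end sketch of steps A-E on tiny list-of-lists depth maps."""
    actions = []
    for frame in depth_frames:                                  # A: acquire
        # B: user contour = pixels whose depth differs from the background
        contour = [(r, c) for r, row in enumerate(frame)
                          for c, d in enumerate(row)
                          if abs(d - background[r][c]) > 50]
        if not contour:
            actions.append("none")
            continue
        rows = [r for r, _ in contour]
        # C: adapt the skeleton scale to the contour's bounding-box height
        scale = (max(rows) - min(rows) + 1) / skeleton_height
        # D: track one key point (the contour centroid) as the motion data
        centroid_row = sum(rows) / len(rows)
        # E: trivially classify the action from the tracked key point
        actions.append("raised" if centroid_row < len(frame) / 2 else "lowered")
    return actions

background = [[2000] * 4 for _ in range(4)]
arm_up = [row[:] for row in background]
arm_up[0][1] = 1800                         # motion near the top of the frame
arm_down = [row[:] for row in background]
arm_down[3][2] = 1800                       # motion near the bottom
print(recognize([arm_up, arm_down], background))  # ['raised', 'lowered']
```

A real implementation would of course track the full adapted skeleton rather than one centroid, but the data flow between the five steps is the same.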
In the limb action recognition method disclosed by the invention, said step C further comprises the steps of:
C1. according to the standard skeleton framework and the user's limb contour obtained in step B, ensuring that the corresponding position of the pelvis is consistent;
C2. moving and scaling the torso skeleton of the standard skeleton framework to the proper height, ensuring that the corresponding position of the head skeleton is consistent;
C3. moving and scaling the lower limb skeletons of the standard skeleton framework to the correct position, ensuring that the corresponding positions of the feet are consistent;
C4. moving and scaling the upper limb skeletons of the standard skeleton framework to the correct position, ensuring that the corresponding positions of both hands are consistent;
C5. checking whether the key-point positions of the adapted skeleton are consistent with said user's limb contour; if not, returning to step C1 above and restarting the adaptation from the pelvis; if so, proceeding to the next step.
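A minimal sketch of the C1-C5 loop, under the simplifying assumption that each joint is reduced to a single vertical position; the joint names and the per-section fitting are illustrative, not the patent's method:

```python
def adapt_skeleton(standard, target_keypoints, tolerance=0.5, max_rounds=5):
    """Toy version of steps C1-C5 over vertical joint positions only."""
    fitted = dict(standard)
    for _ in range(max_rounds):
        # C1: align the pelvis with its position in the user's contour
        offset = target_keypoints["pelvis"] - fitted["pelvis"]
        fitted = {joint: y + offset for joint, y in fitted.items()}
        # C2: move/scale the torso so the head position matches
        fitted["head"] = target_keypoints["head"]
        # C3: move/scale the lower limbs so the feet match
        for joint in ("left_foot", "right_foot"):
            fitted[joint] = target_keypoints[joint]
        # C4: move/scale the upper limbs so both hands match
        for joint in ("left_hand", "right_hand"):
            fitted[joint] = target_keypoints[joint]
        # C5: verify every key point; if any is off, restart from the pelvis
        if all(abs(fitted[j] - y) <= tolerance
               for j, y in target_keypoints.items()):
            return fitted
    raise RuntimeError("skeleton adaptation did not converge")

standard = {"pelvis": 0.0, "head": 8.0, "left_foot": -8.0,
            "right_foot": -8.0, "left_hand": 4.0, "right_hand": 4.0}
user = {"pelvis": 1.0, "head": 10.0, "left_foot": -9.0,
        "right_foot": -9.0, "left_hand": 5.0, "right_hand": 5.0}
fitted = adapt_skeleton(standard, user)
print(fitted["pelvis"], fitted["head"])  # 1.0 10.0
```

The retry-from-the-pelvis structure mirrors C5: any inconsistency sends the fit back to the root of the skeleton rather than patching a single joint.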
In said step A, binocular stereo vision, time-of-flight, or structured light coding technology can be adopted to obtain the depth image information of the user and the environment in which the user is located.
When said step A adopts the structured light coding technology to obtain the depth image information of the user and the environment, it further comprises the steps of:
A1. emitting an encoded infrared structured light plane toward the direction of the user, and receiving and sensing the infrared structured light reflected back by the user and the objects in the surrounding three-dimensional environment;
A2. obtaining the depth image information by comparing the coding of the above reflected infrared structured light with the coding of the original structured light plane.
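The comparison in step A2 amounts to finding, for each image position, how far the projected code has shifted (the disparity) and triangulating depth from it. A toy 1-D sketch follows; the baseline and focal-length values are illustrative, not taken from the patent:

```python
def depth_from_pattern_shift(reference, observed, baseline_mm=75.0, focal_px=580.0):
    """Toy 1-D version of step A2: locate each observed code word in the
    emitted reference pattern; the shift (disparity) gives
    depth = baseline * focal_length / disparity."""
    window = 3
    depths = []
    for x in range(len(observed) - window + 1):
        patch = observed[x:x + window]
        # find the best-matching code word in the reference pattern
        best = min(range(len(reference) - window + 1),
                   key=lambda r: sum(abs(p - q)
                                     for p, q in zip(patch, reference[r:r + window])))
        disparity = abs(best - x)
        depths.append(baseline_mm * focal_px / disparity if disparity
                      else float("inf"))
    return depths

reference = list(range(32))            # a trivially unique 1-D code pattern
observed = reference[4:] + [99] * 4    # pattern shifted 4 px by a flat surface
print(depth_from_pattern_shift(reference, observed)[0])  # 10875.0
```

A real structured-light sensor uses a 2-D pseudo-random pattern and sub-pixel matching, but the disparity-to-depth relation is the same.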
Said step C can also comprise: C6. checking whether the limb contour is consistent with the actual motion of the whole-body skeleton when the user moves.
The limb action recognition method and system based on depth image sensing disclosed by the invention, built on a depth image sensor and a depth image processing unit, can efficiently separate the user's limb image from a complex background, reconstruct the skeletal system of the user's limbs, and further recognize and track the user's limb actions to complete the limb action recognition process. The problems of existing motion-sensing recognition solutions are thereby better solved, limb action recognition efficiency is improved, and the human-computer interaction user experience is improved.
Description of drawings
Fig. 1 is a circuit block diagram of an embodiment of the limb action recognition system of the present invention.
Fig. 2 is a flow chart of an embodiment of the limb action recognition method of the present invention.
Fig. 3 is a schematic diagram of the limb skeleton structure adopted by the limb action recognition system of the present invention.
Fig. 4 is a flow chart of an embodiment of the standard skeleton adaptation method of the present invention.
Embodiment
The present invention is explained in further detail below in conjunction with the accompanying drawings and embodiments.
There are many ways to obtain image depth information, commonly including binocular stereo vision, time-of-flight, and structured light coding technologies. Without loss of generality, the present invention is described here using structured light coding as the means of obtaining image depth information.
Fig. 1 shows an electrical block diagram of an embodiment of the limb action recognition system of the present invention. The main components of the limb action recognition system based on depth image sensing of the present invention comprise:
Depth image sensing unit: responsible for emitting an encoded structured light plane toward the direction of the user, and for receiving and sensing the infrared structured light reflected back by the user and the surrounding environment.
Depth image processing unit: according to the principle of structured light coding, obtains the scene depth information within the visual range of the structured light sensor by comparing the structured light coding reflected by the three-dimensional environment objects with the original plane structured light coding.
Limb contour extraction unit: responsible for extracting the user's limb contour from the environmental background. Without loss of generality, it is assumed here that the user moves relative to the other, static parts of the actual scene, so the user's limb contour can be extracted according to a motion difference analysis over consecutive depth images.
Standard skeleton adaptation unit: responsible for automatically adapting the standard skeleton framework according to the actually extracted user limb contour, including processes such as scaling, rotation, and deformation, thereby converting the standard skeleton framework into an adapted limb skeleton system that fits the current user's limb contour.
Skeleton motion tracking unit: responsible for tracking the motion of the adapted limb skeleton system.
Limb action recognition unit: responsible for completing the limb action recognition process.
The depth image sensing unit and depth image processing unit in Fig. 1 adopt the structured light coding technology as the means of obtaining image depth information. If binocular stereo vision or time-of-flight technology is adopted instead, a corresponding depth image sensor unit together with the corresponding depth image processing technology can likewise obtain the image depth information.
Fig. 2 shows a flow chart of an embodiment of the limb action recognition method of the present invention; the main steps comprise:
1. The depth image sensing unit obtains the depth image data;
2. The depth image processing unit obtains the scene depth information within the visual range and passes it to the limb contour extraction unit, which extracts the dynamic limb contour;
3. The standard skeleton adaptation unit performs standard skeleton adaptation, obtaining an adapted limb skeleton framework that fits the current user's limb contour;
4. Skeleton motion tracking is performed on the adapted limb skeleton framework;
5. The limb action recognition unit performs limb action recognition.
Fig. 3 shows a schematic diagram of the standard limb skeleton structure adopted by the limb action recognition system of the present invention, comprising: head, torso, pelvis, left upper arm, left lower arm, left hand, right upper arm, right lower arm, right hand, left thigh, left lower leg, left foot, right thigh, right lower leg, and right foot.
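The part list above can be represented as a parent-child tree rooted at the pelvis. The joint names and hierarchy below are an assumption inferred from the part list, not taken verbatim from Fig. 3:

```python
# Each body part maps to its parent; None marks the root (pelvis).
SKELETON = {
    "pelvis": None,
    "torso": "pelvis",
    "head": "torso",
    "left_upper_arm": "torso",
    "left_lower_arm": "left_upper_arm",
    "left_hand": "left_lower_arm",
    "right_upper_arm": "torso",
    "right_lower_arm": "right_upper_arm",
    "right_hand": "right_lower_arm",
    "left_thigh": "pelvis",
    "left_lower_leg": "left_thigh",
    "left_foot": "left_lower_leg",
    "right_thigh": "pelvis",
    "right_lower_leg": "right_thigh",
    "right_foot": "right_lower_leg",
}

def chain_to_root(part):
    """Walk from a body part up to the pelvis root."""
    chain = [part]
    while SKELETON[part] is not None:
        part = SKELETON[part]
        chain.append(part)
    return chain

print(chain_to_root("left_hand"))
# ['left_hand', 'left_lower_arm', 'left_upper_arm', 'torso', 'pelvis']
```

Rooting the tree at the pelvis matches the adaptation order described below, which always starts fitting from the pelvis and works outward.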
Fig. 4 shows a flow chart of an embodiment of the standard skeleton adaptation method of the present invention; the standard skeleton adaptation steps comprise:
1. The system prompts the user to assume the standard adaptation posture, i.e., standing upright with both arms stretched out horizontally, ensuring that the limbs are within the visual range of the depth image sensing unit. Alternatively, the user need not assume a specific adaptation posture, and the adaptation system adapts automatically.
2. The user assumes the standard adaptation posture according to the prompt and keeps still.
3. The system starts the skeleton adaptation from the pelvis position of the standard skeleton framework, ensuring that the pelvis position is consistent with the corresponding position of the limb contour.
4. The torso skeleton of the standard skeleton framework is moved and scaled to the proper height, ensuring that the head skeleton is consistent with the corresponding position of the limb contour.
5. The lower limb skeletons of the standard skeleton framework are moved and scaled to the correct position, ensuring that the feet are consistent with the corresponding positions of the limb contour.
6. The upper limb skeletons of the standard skeleton framework are moved and scaled to the correct position, ensuring that both hands are consistent with the corresponding positions of the limb contour.
7. It is checked whether all the bones of the resulting adapted limb skeleton framework fit the actual limb contour. If not, the process returns to step 3 above and the adaptation restarts from the pelvis. If so, the process proceeds to the next step.
The check here means comparing whether the key-point positions of the adapted skeleton are consistent with the user's limb contour, for example whether the head key point of the skeleton lies at the head position of the user's contour, and so on.
8. The system prompts the user that he or she may begin to make any action.
9. It is checked whether the whole-body skeleton is consistent with the movement of the actual limb contour.
10. The skeleton adaptation process is completed.
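The consistency checks in steps 7 and 9 can be sketched as a key-point-inside-contour test. The boolean-grid contour representation and the joint names below are illustrative assumptions:

```python
def keypoints_match_contour(keypoints, contour_mask):
    """Toy version of the checks in steps 7 and 9: every skeleton key point
    must fall inside the extracted limb contour (a 2-D boolean grid)."""
    rows, cols = len(contour_mask), len(contour_mask[0])
    return all(0 <= r < rows and 0 <= c < cols and contour_mask[r][c]
               for r, c in keypoints.values())

# A 3x3 contour mask with the user occupying the middle column:
mask = [[False, True, False],
        [False, True, False],
        [False, True, False]]
good = {"head": (0, 1), "pelvis": (1, 1), "feet": (2, 1)}
bad = {"head": (0, 0), "pelvis": (1, 1), "feet": (2, 1)}  # head off-contour
print(keypoints_match_contour(good, mask), keypoints_match_contour(bad, mask))
# True False
```

A failed check on any single key point sends the process back to step 3, restarting the adaptation from the pelvis.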
Through the depth image sensor and depth image processing unit, the present invention can efficiently separate the user's limb image from a complex background, obtain the depth information of the scene image, reconstruct the skeletal system of the user's limbs, and further recognize and track the user's limb actions, finally completing the user's limb action recognition process, improving limb action recognition efficiency and the human-computer interaction user experience.
The above are merely preferred embodiments of the present invention and are not intended to limit the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (10)
1. A limb action recognition system based on depth image sensing, characterized by comprising:
Depth image information acquisition unit: used to obtain depth image information of the user and the environment in which the user is located;
Limb contour extraction unit: used to extract the user's limb contour from the background of the above depth image information;
Standard skeleton adaptation unit: used to change the size of each part of a standard skeleton framework so that it fits the above user's limb contour, obtaining an adapted limb skeleton framework corresponding to this user;
Skeleton motion tracking unit: used to track, in the depth image information and in the form of the adapted limb skeleton framework, and extract data expressing the motion of the user's limbs;
Limb action recognition unit: used to recognize the user's limb action according to the data expressing the motion of the user's limbs.
2. The limb action recognition system of claim 1, characterized in that said depth image information acquisition unit further comprises:
Depth image sensing unit: used to emit an encoded infrared structured light plane toward the direction of the user, and to receive and sense the infrared structured light reflected back by the user and the objects in the surrounding three-dimensional environment; and
Depth image processing unit: used to obtain the depth image information by comparing the coding of the above reflected infrared structured light with the coding of the original structured light plane.
3. The limb action recognition system of claim 2, characterized in that said limb contour extraction unit extracts the user's limb contour according to a motion difference analysis over consecutive frames of depth image information.
4. The limb action recognition system of claim 3, characterized in that said standard skeleton adaptation unit adapts said standard skeleton framework by scaling, rotation, and deformation calculation methods.
5. The limb action recognition system of claim 4, characterized in that said standard skeleton framework comprises a head, torso, pelvis, left upper arm, left lower arm, left hand, right upper arm, right lower arm, right hand, left thigh, left lower leg, left foot, right thigh, right lower leg, and right foot, interconnected according to normal human body structure.
6. A limb action recognition method based on depth image sensing, characterized by comprising the steps of:
A. obtaining depth image information of the user and the environment in which the user is located;
B. extracting the user's limb contour from the background of the above depth image information;
C. changing the size of each part of a standard skeleton framework so that it fits the above user's limb contour, obtaining an adapted limb skeleton framework corresponding to the user;
D. tracking, in the depth image information and in the form of the adapted limb skeleton framework, and extracting data expressing the motion of the user's limbs;
E. recognizing the user's limb action according to the data expressing the motion of the user's limbs.
7. The limb action recognition method of claim 6, characterized in that said step C further comprises the steps of:
C1. according to the standard skeleton framework and the user's limb contour obtained in step B, ensuring that the corresponding position of the pelvis is consistent;
C2. moving and scaling the torso skeleton of the standard skeleton framework to the proper height, ensuring that the corresponding position of the head skeleton is consistent;
C3. moving and scaling the lower limb skeletons of the standard skeleton framework to the correct position, ensuring that the corresponding positions of the feet are consistent;
C4. moving and scaling the upper limb skeletons of the standard skeleton framework to the correct position, ensuring that the corresponding positions of both hands are consistent;
C5. checking whether the key-point positions of the adapted skeleton are consistent with said user's limb contour; if not, returning to step C1 above and restarting the adaptation from the pelvis; if so, proceeding to the next step.
8. The limb action recognition method of claim 6, characterized in that in said step A, binocular stereo vision, time-of-flight, or structured light coding technology is adopted to obtain the depth image information of the user and the environment in which the user is located.
9. The limb action recognition method of claim 8, characterized in that when said step A adopts the structured light coding technology to obtain the depth image information of the user and the environment, it further comprises the steps of:
A1. emitting an encoded infrared structured light plane toward the direction of the user, and receiving and sensing the infrared structured light reflected back by the user and the objects in the surrounding three-dimensional environment;
A2. obtaining the depth image information by comparing the coding of the above reflected infrared structured light with the coding of the original structured light plane.
10. The limb action recognition method of claim 7, characterized by further comprising:
C6. checking whether the limb contour is consistent with the actual motion of the whole-body skeleton when the user moves.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201110160313XA CN102831380A (en) | 2011-06-15 | 2011-06-15 | Body action identification method and system based on depth image induction |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102831380A true CN102831380A (en) | 2012-12-19 |
Family
ID=47334511
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201110160313XA Pending CN102831380A (en) | 2011-06-15 | 2011-06-15 | Body action identification method and system based on depth image induction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102831380A (en) |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103735268A (en) * | 2013-09-29 | 2014-04-23 | 沈阳东软医疗系统有限公司 | Body position detecting method and system |
CN103995587A (en) * | 2014-05-13 | 2014-08-20 | 联想(北京)有限公司 | Information control method and electronic equipment |
CN104353240A (en) * | 2014-11-27 | 2015-02-18 | 北京师范大学珠海分校 | Running machine system based on Kinect |
CN104361321A (en) * | 2014-11-13 | 2015-02-18 | 侯振杰 | Methods of judging fall behaviors and body balance for old people |
CN104598867A (en) * | 2013-10-30 | 2015-05-06 | 中国艺术科技研究所 | Automatic evaluation method of human body action and dance scoring system |
CN104834913A (en) * | 2015-05-14 | 2015-08-12 | 中国人民解放军理工大学 | Flag signal identification method and apparatus based on depth image |
CN105229411A (en) * | 2013-04-15 | 2016-01-06 | 微软技术许可有限责任公司 | Sane three-dimensional depth system |
CN105824006A (en) * | 2014-12-23 | 2016-08-03 | 国家电网公司 | Method for eliminating safety hidden danger of substation personnel |
CN106022213A (en) * | 2016-05-04 | 2016-10-12 | 北方工业大学 | Human body motion recognition method based on three-dimensional bone information |
CN106250867A (en) * | 2016-08-12 | 2016-12-21 | 南京华捷艾米软件科技有限公司 | A kind of skeleton based on depth data follows the tracks of the implementation method of system |
CN106529399A (en) * | 2016-09-26 | 2017-03-22 | 深圳奥比中光科技有限公司 | Human body information acquisition method, device and system |
CN106606363A (en) * | 2015-10-22 | 2017-05-03 | 上海西门子医疗器械有限公司 | Method and system for determining body position of patient in medical equipment and medical equipment |
CN106650217A (en) * | 2015-10-29 | 2017-05-10 | 佳能市场营销日本株式会社 | Information processing apparatus and information processing method |
CN107346172A (en) * | 2016-05-05 | 2017-11-14 | 富泰华工业(深圳)有限公司 | A kind of action induction method and device |
CN108027441A (en) * | 2015-09-08 | 2018-05-11 | 微视公司 | Mixed mode depth detection |
CN108255173A (en) * | 2017-12-20 | 2018-07-06 | 北京理工大学 | Robot follows barrier-avoiding method and device |
CN108399367A (en) * | 2018-01-31 | 2018-08-14 | 深圳市阿西莫夫科技有限公司 | Hand motion recognition method, apparatus, computer equipment and readable storage medium storing program for executing |
CN108510594A (en) * | 2018-02-27 | 2018-09-07 | 吉林省行氏动漫科技有限公司 | Virtual fit method, device and terminal device |
CN108497568A (en) * | 2018-03-18 | 2018-09-07 | 江苏特力威信息系统有限公司 | A kind of gym suit and limbs measurement method and device based on Quick Response Code identification |
CN108970084A (en) * | 2018-06-29 | 2018-12-11 | 西安深睐信息科技有限公司 | A kind of moving scene analogy method of Behavior-based control identification |
CN109074641A (en) * | 2016-04-28 | 2018-12-21 | 富士通株式会社 | Bone estimation device, bone estimation method and bone estimate program |
CN109389054A (en) * | 2018-09-21 | 2019-02-26 | 北京邮电大学 | Intelligent mirror design method based on automated graphics identification and action model comparison |
WO2019218111A1 (en) * | 2018-05-14 | 2019-11-21 | 合刃科技(武汉)有限公司 | Electronic device and photographing control method |
CN110561399A (en) * | 2019-09-16 | 2019-12-13 | 腾讯科技(深圳)有限公司 | Auxiliary shooting device for dyskinesia condition analysis, control method and device |
CN110569711A (en) * | 2019-07-19 | 2019-12-13 | 沈阳工业大学 | human body action oriented recognition method |
CN111292087A (en) * | 2020-01-20 | 2020-06-16 | 北京沃东天骏信息技术有限公司 | Identity verification method and device, computer readable medium and electronic equipment |
CN111638709A (en) * | 2020-03-24 | 2020-09-08 | 上海黑眸智能科技有限责任公司 | Automatic obstacle avoidance tracking method, system, terminal and medium |
WO2021057027A1 (en) * | 2019-09-27 | 2021-04-01 | 北京市商汤科技开发有限公司 | Human body detection method and apparatus, computer device, and storage medium |
CN112907890A (en) * | 2020-12-08 | 2021-06-04 | 泰州市朗嘉馨网络科技有限公司 | Automatic change protection platform |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101256672A (en) * | 2008-03-21 | 2008-09-03 | 北京中星微电子有限公司 | Object image depth restruction apparatus based on video camera apparatus as well as projecting apparatus thereof |
CN101388114A (en) * | 2008-09-03 | 2009-03-18 | 北京中星微电子有限公司 | Method and system for estimating human body attitudes |
CN101765020A (en) * | 2008-12-23 | 2010-06-30 | 康佳集团股份有限公司 | Television capable of sensing stereo image |
US20100197400A1 (en) * | 2009-01-30 | 2010-08-05 | Microsoft Corporation | Visual target tracking |
Cited By (50)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105229411A (en) * | 2013-04-15 | 2016-01-06 | 微软技术许可有限责任公司 | Sane three-dimensional depth system |
US9928420B2 (en) | 2013-04-15 | 2018-03-27 | Microsoft Technology Licensing, Llc | Depth imaging system based on stereo vision and infrared radiation |
US10929658B2 (en) | 2013-04-15 | 2021-02-23 | Microsoft Technology Licensing, Llc | Active stereo with adaptive support weights from a separate image |
US10928189B2 (en) | 2013-04-15 | 2021-02-23 | Microsoft Technology Licensing, Llc | Intensity-modulated light pattern for active stereo |
US10268885B2 (en) | 2013-04-15 | 2019-04-23 | Microsoft Technology Licensing, Llc | Extracting true color from a color and infrared sensor |
US10816331B2 (en) | 2013-04-15 | 2020-10-27 | Microsoft Technology Licensing, Llc | Super-resolving depth map by moving pattern projector |
CN105229411B (en) * | 2013-04-15 | 2019-09-03 | 微软技术许可有限责任公司 | Robust stereo depth system |
CN103735268A (en) * | 2013-09-29 | 2014-04-23 | 沈阳东软医疗系统有限公司 | Body position detecting method and system |
CN103735268B (en) * | 2013-09-29 | 2015-11-25 | 沈阳东软医疗系统有限公司 | Body position detection method and system |
CN104598867A (en) * | 2013-10-30 | 2015-05-06 | 中国艺术科技研究所 | Automatic evaluation method of human body action and dance scoring system |
CN104598867B (en) * | 2013-10-30 | 2017-12-01 | 中国艺术科技研究所 | Automatic human action evaluation method and dance scoring system |
CN103995587A (en) * | 2014-05-13 | 2014-08-20 | 联想(北京)有限公司 | Information control method and electronic equipment |
CN103995587B (en) * | 2014-05-13 | 2017-09-29 | 联想(北京)有限公司 | Information control method and electronic device |
CN104361321A (en) * | 2014-11-13 | 2015-02-18 | 侯振杰 | Methods of judging fall behaviors and body balance for old people |
CN104361321B (en) * | 2014-11-13 | 2018-02-09 | 侯振杰 | Method for judging fall behavior and balance ability of the elderly |
CN104353240A (en) * | 2014-11-27 | 2015-02-18 | 北京师范大学珠海分校 | Running machine system based on Kinect |
CN105824006A (en) * | 2014-12-23 | 2016-08-03 | 国家电网公司 | Method for eliminating hidden safety hazards for substation personnel |
CN104834913A (en) * | 2015-05-14 | 2015-08-12 | 中国人民解放军理工大学 | Flag signal identification method and apparatus based on depth image |
CN104834913B (en) * | 2015-05-14 | 2018-04-03 | 中国人民解放军理工大学 | Flag signal recognition method and device based on depth image |
CN108027441A (en) * | 2015-09-08 | 2018-05-11 | 微视公司 | Mixed mode depth detection |
CN106606363A (en) * | 2015-10-22 | 2017-05-03 | 上海西门子医疗器械有限公司 | Method and system for determining body position of patient in medical equipment and medical equipment |
CN106650217A (en) * | 2015-10-29 | 2017-05-10 | 佳能市场营销日本株式会社 | Information processing apparatus and information processing method |
CN109074641B (en) * | 2016-04-28 | 2022-02-11 | 富士通株式会社 | Bone estimation device, bone estimation method, and bone estimation program |
CN109074641A (en) * | 2016-04-28 | 2018-12-21 | 富士通株式会社 | Bone estimation device, bone estimation method and bone estimate program |
CN106022213A (en) * | 2016-05-04 | 2016-10-12 | 北方工业大学 | Human body motion recognition method based on three-dimensional bone information |
CN106022213B (en) * | 2016-05-04 | 2019-06-07 | 北方工业大学 | Human body motion recognition method based on three-dimensional bone information |
CN107346172A (en) * | 2016-05-05 | 2017-11-14 | 富泰华工业(深圳)有限公司 | Motion sensing method and device |
CN106250867A (en) * | 2016-08-12 | 2016-12-21 | 南京华捷艾米软件科技有限公司 | A kind of skeleton based on depth data follows the tracks of the implementation method of system |
US10417775B2 (en) | 2016-08-12 | 2019-09-17 | Nanjing Huajie Imi Technology Co., Ltd. | Method for implementing human skeleton tracking system based on depth data |
CN106250867B (en) * | 2016-08-12 | 2017-11-14 | 南京华捷艾米软件科技有限公司 | Implementation method of a human skeleton tracking system based on depth data |
CN106529399A (en) * | 2016-09-26 | 2017-03-22 | 深圳奥比中光科技有限公司 | Human body information acquisition method, device and system |
CN108255173A (en) * | 2017-12-20 | 2018-07-06 | 北京理工大学 | Robot following and obstacle avoidance method and device |
CN108399367A (en) * | 2018-01-31 | 2018-08-14 | 深圳市阿西莫夫科技有限公司 | Hand motion recognition method and device, computer equipment and readable storage medium |
CN108399367B (en) * | 2018-01-31 | 2020-06-23 | 深圳市阿西莫夫科技有限公司 | Hand motion recognition method and device, computer equipment and readable storage medium |
CN108510594A (en) * | 2018-02-27 | 2018-09-07 | 吉林省行氏动漫科技有限公司 | Virtual fitting method, device and terminal device |
CN108497568A (en) * | 2018-03-18 | 2018-09-07 | 江苏特力威信息系统有限公司 | Sportswear and limb measurement method and device based on QR code recognition |
WO2019218111A1 (en) * | 2018-05-14 | 2019-11-21 | 合刃科技(武汉)有限公司 | Electronic device and photographing control method |
CN108970084A (en) * | 2018-06-29 | 2018-12-11 | 西安深睐信息科技有限公司 | Motion scene simulation method based on behavior recognition |
CN109389054A (en) * | 2018-09-21 | 2019-02-26 | 北京邮电大学 | Intelligent mirror design method based on automatic image recognition and action model comparison |
CN110569711A (en) * | 2019-07-19 | 2019-12-13 | 沈阳工业大学 | Human body action-oriented recognition method |
CN110569711B (en) * | 2019-07-19 | 2022-07-15 | 沈阳工业大学 | Human body action oriented recognition method |
JP7326465B2 (en) | 2019-09-16 | 2023-08-15 | テンセント・テクノロジー・(シェンジェン)・カンパニー・リミテッド | Auxiliary imaging device, control method and device for movement disorder disease analysis |
CN110561399A (en) * | 2019-09-16 | 2019-12-13 | 腾讯科技(深圳)有限公司 | Auxiliary shooting device for dyskinesia condition analysis, control method and device |
WO2021052208A1 (en) * | 2019-09-16 | 2021-03-25 | 腾讯科技(深圳)有限公司 | Auxiliary photographing device for movement disorder disease analysis, control method and apparatus |
US11945125B2 (en) | 2019-09-16 | 2024-04-02 | Tencent Technology (Shenzhen) Company Limited | Auxiliary photographing device for dyskinesia analysis, and control method and apparatus for auxiliary photographing device for dyskinesia analysis |
JP2022527007A (en) * | 2019-09-16 | 2022-05-27 | テンセント・テクノロジー・(シェンジェン)・カンパニー・リミテッド | Auxiliary imaging device, control method and device for analysis of movement disorder disease |
WO2021057027A1 (en) * | 2019-09-27 | 2021-04-01 | 北京市商汤科技开发有限公司 | Human body detection method and apparatus, computer device, and storage medium |
CN111292087A (en) * | 2020-01-20 | 2020-06-16 | 北京沃东天骏信息技术有限公司 | Identity verification method and device, computer readable medium and electronic equipment |
CN111638709A (en) * | 2020-03-24 | 2020-09-08 | 上海黑眸智能科技有限责任公司 | Automatic obstacle avoidance tracking method, system, terminal and medium |
CN112907890A (en) * | 2020-12-08 | 2021-06-04 | 泰州市朗嘉馨网络科技有限公司 | Automated protection platform |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102831380A (en) | Body action identification method and system based on depth image induction | |
Suarez et al. | Hand gesture recognition with depth images: A review | |
CN1304931C (en) | Head-mounted stereo vision gesture recognition device | |
CN103529944B (en) | Human motion recognition method based on Kinect | |
CN105389539B (en) | Three-dimensional gesture pose estimation method and system based on depth data | |
CN104102412B (en) | Handheld reading device and method based on augmented reality | |
CN102999152B (en) | Gesture motion recognition method and system | |
US20180186452A1 (en) | Unmanned Aerial Vehicle Interactive Apparatus and Method Based on Deep Learning Posture Estimation | |
Kaur et al. | A review: Study of various techniques of Hand gesture recognition | |
Li et al. | A web-based sign language translator using 3d video processing | |
CN104331158B (en) | Gesture-controlled human-computer interaction method and device | |
CN107632699B (en) | Natural human-machine interaction system based on multi-sensory data fusion | |
CN202150897U (en) | Motion-sensing game television | |
CN104731307B (en) | Motion-sensing action recognition method and human-computer interaction device | |
CN109044651A (en) | Intelligent wheelchair control method and system based on natural gesture instructions in unknown environments | |
CN108876881A (en) | Body-shape-adaptive three-dimensional virtual human model construction method and animation system based on Kinect | |
CN103207667A (en) | Man-machine interaction control method and application thereof | |
CN204028887U (en) | Handheld reading device based on augmented reality | |
CN104460967A (en) | Recognition method for human upper limb skeletal gestures | |
CN203630822U (en) | Virtual image and real scene combined stage interaction integrating system | |
CN110866468A (en) | Gesture recognition system and method based on passive RFID | |
CN109395375A (en) | 3D game interface method based on augmented reality and motion interaction | |
Kakkoth et al. | Survey on real time hand gesture recognition | |
CN109426336A (en) | Virtual reality-assisted type selection device | |
Abdallah et al. | An overview of gesture recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20121219 |