CN104933734B - Multi-Kinect human pose data fusion method - Google Patents

Multi-Kinect human pose data fusion method

Info

Publication number
CN104933734B
CN104933734B (application CN201510363869.7A)
Authority
CN
China
Prior art keywords
kinect
data
skeleton
skeleton point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510363869.7A
Other languages
Chinese (zh)
Other versions
CN104933734A (en)
Inventor
朱虹
卫永波
谢凡凡
权甲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Technology
Original Assignee
Xian University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Technology
Priority to CN201510363869.7A
Publication of CN104933734A
Application granted
Publication of CN104933734B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/20: Analysis of motion
    • G06T7/285: Analysis of motion using a sequence of stereo image pairs
    • G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/251: Analysis of motion using feature-based methods involving models
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10016: Video; Image sequence
    • G06T2207/10021: Stereoscopic video; Stereoscopic image sequence
    • G06T2207/30: Subject of image; Context of image processing
    • G06T2207/30196: Human being; Person

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a multi-Kinect human pose data fusion method. A data acquisition system containing multiple Kinects collects human pose skeleton information; the collected data are classified and then either fused or predicted, yielding complete human pose skeleton information. The method avoids the skeleton-data jumps caused when a Kinect suffers interference, and the incomplete skeleton detection (missing pose data) caused when the person self-occludes, which is convenient for downstream processing such as gesture recognition, human-computer interaction, and virtual reality.

Description

Multi-Kinect human pose data fusion method
Technical field
The invention belongs to the technical field of human pose detection and gesture recognition, and relates to a multi-Kinect human pose data fusion method.
Background technology
With the development of the times, the modes of human-computer interaction keep multiplying: communication is possible not only through language, but body language is also an indispensable part, and one branch of the computer vision field studies the specific meaning of human behavior and posture. To understand a human behavior posture, the specific posture of the person must first be acquired. The conventional approach uses an ordinary camera and applies pattern recognition to analyze the human silhouette in the video, but the accuracy of this approach cannot yet meet the needs of human-computer interaction.
Under this application demand, Kinect emerged: a motion-sensing input device that Microsoft provides for its Xbox 360 game console and for the Windows platform. Kinect is essentially a body-sensing camera based on three-dimensional positioning; it supports real-time motion capture, image recognition, voice input, speech recognition, community interaction, and similar functions. With Microsoft's SDK, a user can obtain a person's skeleton information from Kinect, where each skeleton point is a three-dimensional coordinate. However, Kinect requires the user to face the sensor in order to capture the person's whole-body pose, so Kinect offers no good solution to the self-occlusion problem.
Summary of the invention
The object of the invention is to provide a multi-Kinect human pose data fusion method that solves the problems of skeleton-data jumps under interference and missing pose data under self-occlusion that occur when an existing Kinect extracts skeleton information.
The technical scheme of the invention is a multi-Kinect human pose data fusion method with the following specific steps:
Step 1, data collecting system is built:
Place two Kinects orthogonally, both facing the shooting area. Kinect 1 and Kinect 2 are connected to computer a and computer b respectively, and a LAN is established between computer a and computer b for data transmission;
Step 2, data acquisition:
Start the system. Computer b sends the skeleton data obtained by Kinect 2 to computer a over the LAN in real time, while computer a obtains the skeleton data of Kinect 1 in real time. Each skeleton datum consists of two parts, G(i,j) and F(i,j): G(i,j) is the position coordinate of the j-th skeleton point of the human body with the i-th Kinect as the coordinate origin; F(i,j) is the flag indicating whether the i-th Kinect has tracked the j-th skeleton point of the human body: when F(i,j) is 0, the i-th Kinect did not track the j-th skeleton point; when F(i,j) is 1, it did;
where i is the Kinect index, i being 1 or 2, and j is the skeleton-point index, 0 < j ≤ number of skeleton points;
Step 3, data fusion mode is selected:
Classify the skeleton data obtained by Kinect 1 and Kinect 2, and determine the fusion strategy for each class;
Step 4, data fusion:
Perform data fusion on the skeleton data that satisfy the fusion condition;
Step 5, data prediction:
For the skeleton data that do not satisfy the fusion condition, predict the position using a simple average of the motion increments of the coordinate data.
A further feature of the invention is as follows.
The fusion strategies of step 3 are specifically:
1) for skeleton points satisfying A = {j | F(1,j) = 1}, the skeleton-point information is considered obtained, and the skeleton point detected by Kinect 2 is ignored;
2) for skeleton points satisfying B = {j | F(1,j) = 0, F(2,j) = 1}, go to step 4 and perform data fusion;
3) for skeleton points satisfying C = {j | F(1,j) = 0, F(2,j) = 0}, go to step 5 and perform data prediction.
Step 4 is specifically:
4.1 Find the lowest-index skeleton point tracked by both Kinects, i.e. among the skeleton points satisfying F(1,j) = 1 and F(2,j) = 1 take the minimum j as the base point of the fusion, denoted skeleton point t:
t = min{j | F(1,j) = 1, F(2,j) = 1, 0 < j ≤ number of skeleton points}
If no t satisfying the condition is found, go directly to step 5 and perform data prediction for all skeleton points with F(1,j) = 0;
4.2 For all skeleton points with F(2,j) = 1, compute the offset ε_j of each skeleton point tracked by Kinect 2 relative to skeleton point t:
ε_j = G(2,j) - G(2,t), j ∈ B
where G(2,j) is the position coordinate of skeleton point j from Kinect 2, B is the set of skeleton points j satisfying the fusion condition of step 3, and G(2,t) is the position coordinate of skeleton point t from Kinect 2;
4.3 From the offsets ε_j, compute the position data G(1,j) of all skeleton points not detected by Kinect 1, and set the flags of these skeleton points to 1, i.e. set F(1,j) = 1:
G(1,j) = G(1,t) + ε_j, j ∈ B
where G(1,j) is the position coordinate of skeleton point j from Kinect 1, B is the set of skeleton points j satisfying the fusion condition of step 3, and G(1,t) is the position coordinate of skeleton point t from Kinect 1.
The prediction of step 5, using a simple average of the motion increments of the coordinate data, proceeds as follows:
5.1 Compute the average displacement of the skeleton position over n consecutive frames:
δ(j,T) = (1/n) Σ_{k=1}^{n} (G(1,j,T-k) - G(1,j,T-k-1))
where T is the current frame index; k is the frame offset; n is the number of frames over which the displacement is computed; δ(j,T) is the average displacement of the j-th skeleton point over the n frames preceding frame T; G(1,j,T-k) is the position coordinate of the j-th skeleton point of Kinect 1 at frame T-k, and G(1,j,T-k-1) is its position coordinate at frame T-k-1;
5.2 Obtain the skeleton position at the frame to be predicted:
G(1,j,T) = G(1,j,T-1) + δ(j,T)
where G(1,j,T) is the predicted position coordinate of the j-th skeleton point of Kinect 1 at frame T, and G(1,j,T-1) is its position coordinate at frame T-1.
The invention has the advantage that skeleton information is acquired with multiple Kinects and, according to the class of the acquired information, fused or predicted, so that complete human pose skeleton information is extracted. This solves the existing problems of skeleton-data jumps when a Kinect suffers interference and of missing pose data under self-occlusion, which is convenient for downstream processing such as gesture recognition, human-computer interaction, and virtual reality.
Brief description of the drawings
Fig. 1 is the hardware diagram of the multi-Kinect human pose data fusion method of the invention.
In the figure: 1. Kinect 1; 2. Kinect 2; 3. shooting area; 4. computer a; 5. computer b.
Embodiment
The invention is described in detail below with reference to the drawings and a specific embodiment.
The multi-Kinect human pose data fusion method of the invention is implemented according to the following steps:
Step 1, data collecting system is built:
As shown in Fig. 1, two Kinects are placed orthogonally, both facing shooting area 3, which is determined by the overlap of the two Kinects' camera regions. Kinect 1 (1) and Kinect 2 (2) are connected to computer a (4) and computer b (5) respectively, and a LAN is established between computer a and computer b for data transmission.
By default, computer a is the main processor. While the system runs, computer b sends the human pose skeleton data detected by Kinect 2 to computer a over the LAN in real time, while computer a also obtains the pose skeleton data of Kinect 1 in real time. Computer a then calls the fusion algorithm to fuse the multi-Kinect data.
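The patent specifies only that computer b streams Kinect 2's skeleton frames to computer a over a LAN; it does not fix a wire protocol. As a minimal sketch, assuming TCP with newline-delimited JSON framing and a hypothetical port number (none of these choices come from the patent), the link could look like:

```python
# Illustrative LAN link: computer b pushes one skeleton frame to
# computer a.  TCP + newline-delimited JSON and the port number are
# assumptions, not part of the patent.
import json
import socket
import threading

PORT = 50007  # hypothetical port


def send_frame(payload):
    """Computer b: push one newline-delimited JSON skeleton frame."""
    with socket.create_connection(("127.0.0.1", PORT)) as s:
        s.sendall((json.dumps(payload) + "\n").encode())


def serve_once(payload):
    """Computer a: accept one frame from computer b and decode it."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", PORT))
    srv.listen(1)
    # In the real system computer b runs on another machine; here a
    # thread stands in for it so the sketch is self-contained.
    threading.Thread(target=send_frame, args=(payload,)).start()
    conn, _ = srv.accept()
    line = conn.makefile().readline()
    conn.close()
    srv.close()
    return json.loads(line)
```

In a real deployment computer b would loop, pushing one frame per Kinect tick (about 30 Hz), and computer a would read frames inside its fusion loop.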
Step 2, data acquisition:
Start the system. Computer b sends the 20 skeleton points obtained by Kinect 2 to computer a over the LAN in real time, while computer a obtains the skeleton data of Kinect 1 in real time. Each skeleton datum consists of two parts, G(i,j) and F(i,j): G(i,j) is the position coordinate of the j-th skeleton point of the human body with the i-th Kinect as the coordinate origin; F(i,j) is the flag indicating whether the i-th Kinect has tracked the j-th skeleton point of the human body: when F(i,j) is 0, the i-th Kinect did not track the j-th skeleton point; when F(i,j) is 1, it did.
where i is the Kinect index, i being 1 or 2, and j is the skeleton-point index, 0 < j ≤ number of skeleton points.
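The per-frame data G(i,j) and F(i,j) described above can be held in two small arrays per sensor. This is an illustrative representation only (the names `NUM_JOINTS` and `make_frame` and the NaN placeholder are assumptions, not part of the patent); it assumes the 20-joint Kinect v1 skeleton mentioned above:

```python
import numpy as np

NUM_JOINTS = 20  # the Kinect v1 skeleton tracks 20 skeleton points


def make_frame():
    """One frame of skeleton data for both sensors.

    G[i][j] is the 3-D coordinate of skeleton point j in sensor i's
    own coordinate frame (NaN until tracked); F[i][j] is the tracking
    flag F(i,j) of the patent (False = 0, True = 1).
    """
    G = {i: np.full((NUM_JOINTS, 3), np.nan) for i in (1, 2)}
    F = {i: np.zeros(NUM_JOINTS, dtype=bool) for i in (1, 2)}
    return G, F
```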
Step 3, data fusion mode is selected:
Because the coordinate information of Kinect 1 and of Kinect 2 each takes its own Kinect as the coordinate origin, and the purpose of the fusion is to merge the skeleton data of Kinect 1 and Kinect 2 into one complete set of human pose data, the following fusion strategies are adopted:
1) for skeleton points satisfying A = {j | F(1,j) = 1}, the skeleton-point information is considered obtained, and the skeleton point detected by Kinect 2 is ignored;
2) for skeleton points satisfying B = {j | F(1,j) = 0, F(2,j) = 1}, go to step 4 and perform data fusion;
3) for skeleton points satisfying C = {j | F(1,j) = 0, F(2,j) = 0}, go to step 5 and perform data prediction.
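The three strategies above amount to partitioning the skeleton-point indices by the two flag vectors. A minimal sketch of that partition (the function name and the boolean-array encoding are assumptions for illustration):

```python
import numpy as np


def classify_joints(F1, F2):
    """Split skeleton-point indices into the three sets of step 3.

    A: tracked by Kinect 1 (keep its coordinate, ignore Kinect 2);
    B: missed by Kinect 1 but tracked by Kinect 2 (fuse in step 4);
    C: missed by both sensors (predict in step 5).
    """
    F1 = np.asarray(F1, dtype=bool)
    F2 = np.asarray(F2, dtype=bool)
    A = np.flatnonzero(F1)
    B = np.flatnonzero(~F1 & F2)
    C = np.flatnonzero(~F1 & ~F2)
    return A, B, C
```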
Step 4, data fusion:
4.1 Find the lowest-index skeleton point tracked by both Kinects, i.e. among the skeleton points satisfying F(1,j) = 1 and F(2,j) = 1 take the minimum j as the base point of the fusion, denoted skeleton point t. The base skeleton point t is obtained as:
t = min{j | F(1,j) = 1, F(2,j) = 1, 0 < j ≤ number of skeleton points} (1)
If no t satisfying the condition is found, go directly to step 5 and perform data prediction for all skeleton points with F(1,j) = 0;
4.2 For all skeleton points with F(2,j) = 1, compute the offset ε_j of each skeleton point tracked by Kinect 2 relative to base skeleton point t:
ε_j = G(2,j) - G(2,t), j ∈ B (2)
where G(2,j) is the position coordinate of skeleton point j from Kinect 2, B is the set of skeleton points j satisfying the fusion condition of step 3, and G(2,t) is the position coordinate of skeleton point t from Kinect 2;
4.3 From the offsets ε_j, compute the position data G(1,j) of all skeleton points not detected by Kinect 1, and set the flags of these skeleton points to 1, i.e. set F(1,j) = 1:
G(1,j) = G(1,t) + ε_j, j ∈ B (3)
where G(1,j) is the position coordinate of skeleton point j from Kinect 1, B is the set of skeleton points j satisfying the fusion condition of step 3, and G(1,t) is the position coordinate of skeleton point t from Kinect 1.
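Steps 4.1 to 4.3 can be sketched as follows. This is an illustration under assumptions (the function name and array layout are not from the patent): it takes the lowest skeleton-point index tracked by both sensors as base point t, then reconstructs each point in B as G(1,t) + ε_j, per equations (1) to (3):

```python
import numpy as np


def fuse(G1, G2, F1, F2):
    """Fill skeleton points missed by Kinect 1 via Kinect 2's offsets.

    G1, G2: (num_joints, 3) coordinate arrays; F1, F2: boolean flags.
    Returns updated copies of (G1, F1).  If no point is tracked by
    both sensors, the inputs are returned unchanged and all missing
    points fall through to the prediction of step 5.
    """
    G1, F1 = G1.copy(), F1.copy()
    both = np.flatnonzero(F1 & F2)       # candidates for base point t
    if both.size == 0:
        return G1, F1                    # no base point: go to step 5
    t = both[0]                          # eq. (1): minimum index j
    B = np.flatnonzero(~F1 & F2)         # set B of step 3
    G1[B] = G1[t] + (G2[B] - G2[t])      # eq. (2) offsets, then eq. (3)
    F1[B] = True                         # mark these points as obtained
    return G1, F1
```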
Step 5, data prediction:
After the fusion stage of step 4, some skeleton coordinates may still not have been obtained; these skeleton coordinates must be predicted, i.e. the coordinates of all skeleton points whose F(1,j) is still 0 are predicted. Considering timeliness and the continuity of human motion, the invention predicts using a simple average of the motion increments of the coordinate data, as follows:
5.1 Compute the average displacement of the skeleton position over n consecutive frames:
δ(j,T) = (1/n) Σ_{k=1}^{n} (G(1,j,T-k) - G(1,j,T-k-1)) (4)
where T is the current frame index; k is the frame offset; n is the number of frames over which the displacement is computed; δ(j,T) is the average displacement of the j-th skeleton point over the n frames preceding frame T; G(1,j,T-k) is the position coordinate of the j-th skeleton point of Kinect 1 at frame T-k, and G(1,j,T-k-1) is its position coordinate at frame T-k-1.
5.2 Obtain the skeleton position at the frame to be predicted:
G(1,j,T) = G(1,j,T-1) + δ(j,T) (5)
where G(1,j,T) is the predicted position coordinate of the j-th skeleton point of Kinect 1 at frame T, and G(1,j,T-1) is its position coordinate at frame T-1.
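Equations (4) and (5) predict a still-missing skeleton point from its own recent history. A minimal sketch (the function name and the history layout are assumptions; `history` holds the point's Kinect 1 coordinates at frames T-n-1 through T-1, oldest first):

```python
import numpy as np


def predict(history, n):
    """Predict the skeleton-point position at frame T per eqs. (4), (5).

    delta is the mean of the n per-frame displacements over the
    preceding frames (eq. 4); the prediction adds it to the latest
    known position G(1,j,T-1) (eq. 5).
    """
    h = np.asarray(history, dtype=float)            # oldest first
    # eq. (4): mean of the last n consecutive differences
    delta = np.diff(h[-(n + 1):], axis=0).mean(axis=0)
    return h[-1] + delta                            # eq. (5)
```

With a constant-velocity history the prediction simply continues the motion, which matches the continuity-of-motion assumption stated above.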
Through the above steps, the position coordinates of all skeleton points are either tracked or predicted, giving complete human pose skeleton information. This solves the failure to detect the person's skeleton information when self-occlusion occurs or when the Kinect suffers interference, and is convenient for downstream processing such as gesture recognition, human-computer interaction, and virtual reality.

Claims (2)

1. A multi-Kinect human pose data fusion method, characterized by the following specific steps:
Step 1, build the data acquisition system:
Place two Kinects orthogonally, both facing the shooting area. Kinect 1 and Kinect 2 are connected to computer a and computer b respectively, and a LAN is established between computer a and computer b for data transmission;
Step 2, data acquisition:
Start the system. Computer b sends the skeleton data obtained by Kinect 2 to computer a over the LAN in real time, while computer a obtains the skeleton data of Kinect 1 in real time. Each skeleton datum consists of two parts, G(i,j) and F(i,j): G(i,j) is the position coordinate of the j-th skeleton point of the human body with the i-th Kinect as the coordinate origin; F(i,j) is the flag indicating whether the i-th Kinect has tracked the j-th skeleton point of the human body: when F(i,j) is 0, the i-th Kinect did not track the j-th skeleton point; when F(i,j) is 1, it did;
where i is the Kinect index, i being 1 or 2, and j is the skeleton-point index, 0 < j ≤ number of skeleton points;
Step 3, select the data fusion mode:
Classify the skeleton data obtained by Kinect 1 and Kinect 2, and determine the fusion strategy for each class, specifically:
3.1 for skeleton points satisfying A = {j | F(1,j) = 1}, the skeleton-point information is considered obtained, and the skeleton point detected by Kinect 2 is ignored;
3.2 for skeleton points satisfying B = {j | F(1,j) = 0, F(2,j) = 1}, go to step 4 and perform data fusion;
3.3 for skeleton points satisfying C = {j | F(1,j) = 0, F(2,j) = 0}, go to step 5 and perform data prediction;
Step 4, data fusion:
Perform data fusion on the skeleton data that satisfy the fusion condition;
4.1 Find the lowest-index skeleton point tracked by both Kinects, i.e. among the skeleton points satisfying F(1,j) = 1 and F(2,j) = 1 take the minimum j as the base point of the fusion, denoted skeleton point t:
t = min{j | F(1,j) = 1, F(2,j) = 1, 0 < j ≤ number of skeleton points}
If no t satisfying the condition is found, go directly to step 5 and perform data prediction for all skeleton points with F(1,j) = 0;
4.2 For all skeleton points with F(2,j) = 1, compute the offset ε_j of each skeleton point tracked by Kinect 2 relative to skeleton point t:
ε_j = G(2,j) - G(2,t), j ∈ B
where G(2,j) is the position coordinate of skeleton point j from Kinect 2, B is the set of skeleton points j satisfying the fusion condition of step 3, and G(2,t) is the position coordinate of skeleton point t from Kinect 2;
4.3 From the offsets ε_j, compute the position data G(1,j) of all skeleton points not detected by Kinect 1, and set the flags of these skeleton points to 1, i.e. set F(1,j) = 1:
G(1,j) = G(1,t) + ε_j, j ∈ B
where G(1,j) is the position coordinate of skeleton point j from Kinect 1, B is the set of skeleton points j satisfying the fusion condition of step 3, and G(1,t) is the position coordinate of skeleton point t from Kinect 1;
Step 5, data prediction:
For the skeleton data that do not satisfy the fusion condition, predict the position using a simple average of the motion increments of the coordinate data.
2. The multi-Kinect human pose data fusion method according to claim 1, characterized in that the prediction of step 5, using a simple average of the motion increments of the coordinate data, proceeds as follows:
5.1 Compute the average displacement of the skeleton position over n consecutive frames:
δ(j,T) = (1/n) Σ_{k=1}^{n} (G(1,j,T-k) - G(1,j,T-k-1))
where T is the current frame index; k is the frame offset; n is the number of frames over which the displacement is computed; δ(j,T) is the average displacement of the j-th skeleton point over the n frames preceding frame T; G(1,j,T-k) is the position coordinate of the j-th skeleton point of Kinect 1 at frame T-k, and G(1,j,T-k-1) is its position coordinate at frame T-k-1;
5.2 Obtain the skeleton position at the frame to be predicted:
G(1,j,T) = G(1,j,T-1) + δ(j,T)
where G(1,j,T) is the predicted position coordinate of the j-th skeleton point of Kinect 1 at frame T, and G(1,j,T-1) is its position coordinate at frame T-1.
CN201510363869.7A 2015-06-26 2015-06-26 Multi-Kinect human pose data fusion method Expired - Fee Related CN104933734B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510363869.7A CN104933734B (en) 2015-06-26 2015-06-26 Multi-Kinect human pose data fusion method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510363869.7A CN104933734B (en) 2015-06-26 2015-06-26 Multi-Kinect human pose data fusion method

Publications (2)

Publication Number Publication Date
CN104933734A CN104933734A (en) 2015-09-23
CN104933734B true CN104933734B (en) 2017-11-28

Family

ID=54120887

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510363869.7A Expired - Fee Related CN104933734B (en) 2015-06-26 2015-06-26 Multi-Kinect human pose data fusion method

Country Status (1)

Country Link
CN (1) CN104933734B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447889A (en) * 2015-11-19 2016-03-30 北京理工大学 Remote virtual opera command system based body feeling
CN105740450A (en) * 2016-02-03 2016-07-06 浙江大学 Multi-Kinect based 3D human body posture database construction method
CN106372858A (en) * 2016-09-12 2017-02-01 成都集致生活科技有限公司 Recruitment system of building industry, and application method thereof
CN106981075A (en) * 2017-05-31 2017-07-25 江西制造职业技术学院 The skeleton point parameter acquisition devices of apery motion mimicry and its recognition methods
CN107274438B (en) * 2017-06-28 2020-01-17 山东大学 Single Kinect multi-person tracking system and method supporting mobile virtual reality application
CN107577451B (en) * 2017-08-03 2020-06-12 中国科学院自动化研究所 Multi-Kinect human body skeleton coordinate transformation method, processing equipment and readable storage medium
CN107563295B (en) * 2017-08-03 2020-07-28 中国科学院自动化研究所 Multi-Kinect-based all-dimensional human body tracking method and processing equipment
CN107993249A (en) * 2017-08-23 2018-05-04 北京航空航天大学 A kind of body gait data fusion method based on more Kinect
CN110866417A (en) * 2018-08-27 2020-03-06 阿里巴巴集团控股有限公司 Image processing method and device and electronic equipment
CN109373993A (en) * 2018-10-09 2019-02-22 深圳华侨城文化旅游科技股份有限公司 A kind of positioning system and method based on more somatosensory devices
CN109875562A (en) * 2018-12-21 2019-06-14 鲁浩成 A kind of human somatotype monitoring system based on the more visual analysis of somatosensory device
CN111582081A (en) * 2020-04-24 2020-08-25 西安交通大学 Multi-Kinect serial gait data space-time combination method and measuring device
CN112200126A (en) * 2020-10-26 2021-01-08 上海盛奕数字科技有限公司 Method for identifying limb shielding gesture based on artificial intelligence running
CN112891922B (en) * 2021-03-18 2022-11-22 山东梦幻视界智能科技有限公司 Virtual reality somatosensory interaction method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103529944A (en) * 2013-10-17 2014-01-22 合肥金诺数码科技股份有限公司 Human body movement identification method based on Kinect
CN203673527U (en) * 2014-01-14 2014-06-25 河海大学常州校区 Human body three-dimension scanning hardware platform based on kinects

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103529944A (en) * 2013-10-17 2014-01-22 合肥金诺数码科技股份有限公司 Human body movement identification method based on Kinect
CN203673527U (en) * 2014-01-14 2014-06-25 河海大学常州校区 Human body three-dimension scanning hardware platform based on kinects

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Estimating human motion from multiple Kinect Sensors; Stylianos Asteriadis et al.; 《International Conference on Mirage》; 20131231; entire document *
Evaluation of Skeleton Trackers and Gesture Recognition for Human-robot Interaction; Martin Bünger; 《Applied Catalysis A General》; 20141231; Vol. 475, No. 5; main text: Appendices (Part IV), Article E, Section III DATA ACQUISITION, subsection A. Skeleton Tracking Systems paragraph 1 and subsection C. Dual Kinects; Section IV RESULTS AND DISCUSSION paragraph 2 and subsection B. Dual Kinect paragraph 1 *
Multi-Kinect Skeleton Fusion for Physical Rehabilitation Monitoring; Saiyi Li et al.; 《Engineering in Medicine & Biology Society》; 20141231; entire document *
Tracking People across Multiple Non-Overlapping RGB-D Sensors; Emilio J. Almazan et al.; 《Computer Vision & Pattern Recognition Workshops》; 20131231; Vol. 13, No. 4; entire document *

Also Published As

Publication number Publication date
CN104933734A (en) 2015-09-23

Similar Documents

Publication Publication Date Title
CN104933734B (en) Multi-Kinect human pose data fusion method
KR101711736B1 (en) Feature extraction method for motion recognition in image and motion recognition method using skeleton information
CN105930767B (en) A kind of action identification method based on human skeleton
CN103941866B (en) Three-dimensional gesture recognizing method based on Kinect depth image
CN110363867B (en) Virtual decorating system, method, device and medium
CN102184541B (en) Multi-objective optimized human body motion tracking method
JP2021524113A (en) Image processing methods and equipment, imaging equipment, and storage media
CN104616028B (en) Human body limb gesture actions recognition methods based on space segmentation study
EP3628380B1 (en) Method for controlling virtual objects, computer readable storage medium and electronic device
CN110570455A (en) Whole body three-dimensional posture tracking method for room VR
CN105159452B (en) A kind of control method and system based on human face modeling
CN107992858A (en) A kind of real-time three-dimensional gesture method of estimation based on single RGB frame
CN108829233B (en) Interaction method and device
Wu et al. Incorporating motion analysis technology into modular arrangement of predetermined time standard (MODAPTS)
CN116052276A (en) Human body posture estimation behavior analysis method
CN115576426A (en) Hand interaction method for mixed reality flight simulator
Huang et al. Intelligent yoga coaching system based on posture recognition
Zou et al. Automatic reconstruction of 3D human motion pose from uncalibrated monocular video sequences based on markerless human motion tracking
Yin et al. A systematic review on digital human models in assembly process planning
CN112602100B (en) Action analysis device, action analysis method, storage medium, and action analysis system
Alexanderson et al. Towards Fully Automated Motion Capture of Signs--Development and Evaluation of a Key Word Signing Avatar
CN112541870A (en) Video processing method and device, readable storage medium and electronic equipment
CN203630717U (en) Interaction system based on a plurality of light inertial navigation sensing input devices
CN111310655A (en) Human body action recognition method and system based on key frame and combined attention model
US11887257B2 (en) Method and apparatus for virtual training based on tangible interaction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20171128

Termination date: 20200626
