CN105912985A - Human skeleton joint point behavior motion expression method based on energy function - Google Patents

Human skeleton joint point behavior motion expression method based on energy function

Info

Publication number
CN105912985A
CN105912985A (application number CN201610203252.3A)
Authority
CN
China
Prior art keywords
joint point
energy
human
human skeleton
frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610203252.3A
Other languages
Chinese (zh)
Inventor
Wang Yongxiong (王永雄)
Zeng Yan (曾艳)
Wei Guoliang (魏国亮)
Song Yan (宋燕)
Li Xuan (李璇)
Liu Jiaying (刘嘉莹)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Shanghai for Science and Technology
Original Assignee
University of Shanghai for Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Shanghai for Science and Technology filed Critical University of Shanghai for Science and Technology
Priority to CN201610203252.3A priority Critical patent/CN105912985A/en
Publication of CN105912985A publication Critical patent/CN105912985A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items of sport video content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G06V40/23 - Recognition of whole body movements, e.g. for sport training

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to a human skeleton joint point behavior motion expression method based on an energy function. According to the method, the position information of the human skeleton joint points is first acquired by video equipment; then the kinetic energy and potential energy of each human skeleton joint point and the person-object interaction potential energy are calculated, and the human motion characteristics are expressed quantitatively. The human behavior video sequence is divided by merging adjacent frames: the merging is based on the energy similarity of two adjacent frames, whose energy values are computed from the human motion characteristics and compared. When the similarity is smaller than the similarity threshold, the two frames belong to the same division group; otherwise they belong to different motion division segments. In this way, multiple sub-action video sequences, each with complete motion meaning, are obtained. The method can greatly improve the accuracy and reliability of video division, and can further be applied in many areas such as human motion recognition and key frame extraction.

Description

Method for expressing human skeleton joint point behavior and motion based on an energy function
Technical field
The present invention relates to image information processing technology, and in particular to a method for expressing the behavior and motion of human skeleton joint points based on an energy function.
Background technology
With the wide use of video equipment and 3D cameras, recognition of human behavior and actions has become an important research subject in the field of computer vision. Representing human behavior appropriately is a thorny and important task; it has special significance for surveillance, intelligent robotics, human-computer interaction and the like, and it is the foundation and a key step for accurately recognizing and understanding human behavior. At present, behavior representation methods based on the human skeleton mainly use features such as skeleton frames with joint angles, the motion trajectories of individual joint points, and velocities to represent the position, motion and trajectory information of human actions. However, these representations are difficult to interpret intuitively, and different people perform the same action differently, so the joint point trajectories are not identical across people.
A search of the existing literature shows that Ofli et al., in the article "Sequence of the most informative joints (SMIJ): A new representation for human skeletal action recognition" published in J. Vis. Commun. Image R., proposed that when the human body performs the same action, the number and order of the joint points used are consistent, and used entropy to describe the information of a human action quantitatively. However, this action representation is defective for certain specific actions: when the human body rotates in place, the joint angles vary little, so the entropy also varies little and the action is mistaken for a resting state. Moreover, from a perceptual standpoint such a representation is not intuitive, and it is not well suited for direct use in dividing video sequences.
Summary of the invention
The present invention addresses the misjudgment problem that existing action representation methods exhibit on certain specific actions, and proposes a method for expressing the behavior and motion of human skeleton joint points based on an energy function. The method quantifies human actions simply and intuitively and is well suited to dividing a person's action video sequence. It not only establishes the kinetic energy and potential energy of the human skeleton joint points, but also takes the interaction between the person and objects into account; a single energy function is used to represent human behavior, remedying the defect of using entropy to represent the amount of information, so that the form, intensity and information content of a person's actions can be described intuitively.
The technical scheme of the present invention is a method for expressing human skeleton joint point behavior and motion based on an energy function. First, the position information of the human skeleton joint points is acquired by video equipment. Then the kinetic energy and potential energy of each skeleton joint point and the person-object interaction potential energy are calculated, quantitatively expressing the person's motion characteristics. The human behavior video sequence is divided by merging adjacent frames: the merging is based on the energy similarity of the two adjacent frames, whose energy values are computed from the person's motion characteristics and compared. When the similarity is smaller than the similarity threshold, the two frames belong to the same division group; otherwise they belong to different action division segments. Finally, sub-action video sequences, each with complete action meaning, are obtained.
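For illustration only, the following minimal sketch (not part of the original disclosure) wires the described steps together. The (T, n, 3) NumPy array layout, the choice of the first frame as the initial posture, and the helper functions kinetic_energy, pose_potential, interaction_potential and similarity, sketched in the sections that follow, are all assumptions.

```python
import numpy as np

# Hypothetical end-to-end sketch of the described division pipeline.
# The helper functions are sketched in the sections that follow.
def segment_video(joints, body_center, object_pos, dt, threshold=0.85):
    """joints: (T, n, 3) joint positions in meters; body_center and
    object_pos: (T, 3) positions; dt: seconds between adjacent frames."""
    e_k = kinetic_energy(joints, dt)                       # (T,) kinetic energy
    e_ap = pose_potential(joints, joints[0])               # (T,) body potential
    e_op = interaction_potential(body_center, object_pos)  # (T,) interaction
    cuts = [0]                                             # segment start frames
    for t in range(1, len(joints)):
        s = similarity((e_ap[t - 1], e_op[t - 1], e_k[t - 1]),
                       (e_ap[t], e_op[t], e_k[t]))
        # similarity below the threshold -> same division group;
        # otherwise frame t starts a new action division segment
        if s >= threshold:
            cuts.append(t)
    return cuts
```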
Calculation of the kinetic energy of each joint point of the human skeleton: a Kinect or an Asus Xtion Pro Live is used to obtain the human skeleton and the three-dimensional position coordinates of the skeleton joint points. P_{i,t}(x_{i,t}, y_{i,t}, z_{i,t}) is the position of the i-th skeleton joint point acquired by the video equipment at time t, where (x_{i,t}, y_{i,t}, z_{i,t}) are three-dimensional coordinates with values in meters. The motion speed v_{i,t} of each joint point is calculated from the difference of the joint positions P_{i,t} in two frames, or from the mean of multi-frame differences. The kinetic-energy sum E_{K,t} of the joint points is then calculated as

$$E_{K,t} = \frac{1}{2}\sum_{i \in F_t} k_i v_{i,t}^2, \qquad v_{i,t} = \frac{\lVert P_{i,t} - P_{i,t-1} \rVert}{\Delta t}$$

where k_i is the kinetic-energy weight of the corresponding joint point, F_t is the set of joint points in the frame at time t, E_{K,t} denotes the kinetic-energy sum of all human joint points in frame t, v_{i,t} denotes the speed of the i-th joint point in frame t, Δt denotes the time interval between two adjacent frames, i ∈ F_t indexes the set of all joint points of the human body (i = 1, ..., n, with n the number of joint points), and K denotes the vector of joint-point coefficients.
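As a concrete reading of the above, a minimal sketch of the kinetic-energy computation; the (T, n, 3) array layout, the backward two-frame difference for v_{i,t}, and the default weights k_i = 10 (matching K = [10, ..., 10]^T used later in the embodiment) are assumptions.

```python
import numpy as np

def kinetic_energy(joints, dt, k=None):
    """joints: (T, n, 3) joint positions in meters; dt: frame interval in s."""
    T, n, _ = joints.shape
    if k is None:
        k = np.full(n, 10.0)       # K = [10, ..., 10]^T as in the embodiment
    # speed of each joint from the difference of two adjacent frame positions
    v = np.linalg.norm(np.diff(joints, axis=0), axis=2) / dt   # (T-1, n)
    e_k = 0.5 * (v ** 2) @ k                                   # (T-1,) E_{K,t}
    return np.concatenate([[0.0], e_k])   # no speed estimate for frame 0
```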
Calculation of the potential energy of each joint point of the human skeleton:

$$E_{AP,t} = \sum_{i \in F_t} l_i \lVert P_{i,t} - P_{i,0} \rVert^2$$

where P_{i,0}(x_{i,0}, y_{i,0}, z_{i,0}) denotes the initial position of the i-th skeleton joint point, l_i is the potential-energy weight of the corresponding joint point, and L is the parameter vector, whose elements are set to 1.
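A minimal sketch of this body-potential computation; treating the posture deviation as a squared Euclidean distance per joint is our assumption, since the text fixes only the weights l_i (all set to 1) and the initial positions P_{i,0}.

```python
import numpy as np

def pose_potential(joints, initial, l=None):
    """joints: (T, n, 3); initial: (n, 3) natural-posture joint positions."""
    T, n, _ = joints.shape
    if l is None:
        l = np.ones(n)             # all elements of the L vector set to 1
    # squared deviation of each joint from its initial (natural) position
    dev2 = np.linalg.norm(joints - initial, axis=2) ** 2   # (T, n)
    return dev2 @ l                                        # (T,) E_{AP,t}
```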
Calculation of the person-object interaction potential energy:

$$E_{OP,t} = M^{T}\left(P_{j,t} - P^{O}_{j,t}\right)^{-2}$$

where P^{O}_{j,t} is the position, acquired by the video equipment at time t, of the j-th object having an interactive relation with the human body, P_{j,t} denotes the position of the human body's center of gravity acquired by the video equipment at time t, and M is the parameter vector, whose elements are set to 1.
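A minimal sketch of the interaction potential, reading the formula as an inverse-square function of the person-object distance; the single-object case and the epsilon guard against division by zero are assumptions.

```python
import numpy as np

def interaction_potential(body_center, object_pos, m=1.0, eps=1e-6):
    """body_center, object_pos: (T, 3) positions in meters."""
    # squared distance between the body's center of gravity and the object
    dist2 = np.sum((body_center - object_pos) ** 2, axis=1)
    return m / (dist2 + eps)       # (T,) E_{OP,t}, inverse-square form
```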
Determination formula of the similarity threshold:

$$\mathrm{Sim}(t_1, t_2) = \frac{\left(E_{AP,t_1} + E_{OP,t_1} + E_{K,t_1}\right) - \left(E_{AP,t_2} + E_{OP,t_2} + E_{K,t_2}\right)}{E_{K,t_1} + E_{AP,t_1} + E_{OP,t_1}}.$$
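A minimal sketch of the similarity measure; the formula is implemented literally except that the numerator is taken in absolute value, our assumption so that energy increases and decreases are treated alike.

```python
def similarity(energies_t1, energies_t2):
    """Each argument is an (E_AP, E_OP, E_K) tuple for one frame."""
    e_ap1, e_op1, e_k1 = energies_t1
    e_ap2, e_op2, e_k2 = energies_t2
    # absolute energy difference, normalised by the first frame's total energy
    num = abs((e_ap1 + e_op1 + e_k1) - (e_ap2 + e_op2 + e_k2))
    return num / (e_k1 + e_ap1 + e_op1)
```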
The beneficial effects of the present invention are as follows: the energy-function-based method for expressing human skeleton joint point behavior and motion not only establishes the kinetic energy and potential energy of the human skeleton joint points, but also takes the interaction between the person and objects into account. A single energy function is used to represent human behavior, remedying the defect of using entropy to represent the amount of information, so that the form, intensity and information content of a person's actions can be described intuitively. This solves the problem of quantitatively expressing human actions. Applying the method to the sub-action division of video sequences can greatly improve the accuracy and reliability of video division, and the method is likewise applicable to many other areas such as human action recognition and key frame extraction.
Brief description of the drawings
Fig. 1 shows the human skeleton of the present invention, composed of links and joint points;
Fig. 2 is a flow chart of the behavior motion expression method of the present invention;
Fig. 3 shows the smoothed kinetic-energy curve of a long action with k_i = 10, together with the detected segmentation points and key frames;
Fig. 4 is an RGB image illustrating the making-oatmeal behavior;
Fig. 5 compares the results of several sub-action division methods on the making-oatmeal behavior;
Fig. 6 compares the energy rankings of different joint points for the drinking action;
Fig. 7 compares the energy rankings of different joint points for the wearing-contact-lenses action.
Detailed description of the invention
The method first acquires the position information of the human skeleton joint points through video equipment, then calculates the kinetic energy and potential energy of each skeleton joint point and the person-object interaction potential energy, quantitatively expressing the person's motion characteristics. Finally, this representation is applied to the sub-action division of long videos, yielding sub-action video sequences each with complete action meaning.
In the present embodiment, the above method is first tested on the simple actions of the Microsoft Research Cambridge-12 (MSRC-12) action dataset, and then on the compound actions of the Cornell Activity Dataset-120 (CAD-120), demonstrating the effectiveness of our division method for compound action recognition.
The present embodiment comprises the following steps.
In the first step, the human skeleton is obtained from the Kinect SDK software. The skeleton consists of 19 line segments connecting 20 joint points, such as the hands, neck, torso, left shoulder, left elbow, left palm and right shoulder; the skeleton composed of links and joint points is shown in Fig. 1. The three-dimensional position coordinates (x, y, z) of each skeleton joint point are thus obtained at each moment (as shown in Fig. 1), the motion speed of each joint point is calculated from two-frame differences, and the kinetic-energy sum of the joint points is then calculated as

$$E_{K,t} = \frac{1}{2}\sum_{i \in F_t} k_i v_{i,t}^2, \qquad v_{i,t} = \frac{\lVert P_{i,t} - P_{i,t-1} \rVert}{\Delta t}$$

where E_{K,t} denotes the kinetic-energy sum of all human joint points in frame t, v_{i,t} denotes the speed of the i-th joint point in frame t, Δt denotes the time interval between two adjacent frames, F_t denotes the set of joint points in the frame at time t, x_{i,t}, y_{i,t}, z_{i,t} denote the three-dimensional coordinate values of each joint point at time t (in meters), i ∈ F_t indexes the set of all joint points of the human body (i = 1, ..., n), n is the number of joint points (20 here), and K is the vector of joint-point coefficients (each joint point has its own specific coefficient).
In the second step, the Kinect software or the ASUS Xtion Pro Live software is used to obtain the human skeleton and the three-dimensional position coordinates (x, y, z) of the skeleton joint points, and the posture deviation E_{AP,t} between the current joint positions and the initial joint positions, i.e., the potential energy of the human body, is calculated. At the same time, objects are located within the three-dimensional space around the human skeleton; an object detector is then applied to a group of RGB images to obtain the positional relation between the detected objects and the human body at each moment, and the interaction potential energy of the human body and the objects is expressed by the formula for E_{OP,t}.
The energy equation contains the potential energy of the human body and the interaction potential energy of the human body and objects. To make it easy to observe the consistency of adjacent segmentation points, we construct two energy equations that represent, respectively, the potential energy of the human body and of the contacted object:

$$E_{AP,t} = \sum_{i \in F_t} l_i \lVert P_{i,t} - P_{i,0} \rVert^2, \qquad E_{OP,t} = M^{T}\left(P_{j,t} - P^{O}_{j,t}\right)^{-2}$$

where E_{AP,t} and E_{OP,t} denote, respectively, the potential energy of the human body and the person-object interaction potential energy at time t; P_{i,0}(x_{i,0}, y_{i,0}, z_{i,0}) denotes the initial position of the i-th skeleton joint point, which we regard as the natural posture of the human body, such as the natural standing posture shown in Fig. 1; P^{O}_{j,t} is the position, acquired by the video equipment at time t, of the j-th object having an interactive relation with the human body; P_{j,t} denotes the position of the human body's center of gravity acquired by the video equipment at time t; L and M are the parameter vectors in the two formulas, and l_i is the potential-energy weight of the corresponding joint point. E_{AP,t} represents the posture deviation between the current joint positions at time t and the initial joint positions, and E_{OP,t} represents the interaction potential energy of the human body and the object at time t. In the test process, the maximum duration of a merged group is limited to 20 seconds, K = [10, ..., 10]^T, and all elements of the parameter vectors L and M are set to 1.
In the third step, in order to merge energy-similar sub-actions into more complex and complete action sequences carrying more identification information, the human behavior video sequence is divided by merging adjacent frames, following the flow chart of the behavior motion expression method shown in Fig. 2. The energy similarity of two adjacent frames is defined by the formula below. When this similarity is smaller than the similarity threshold (in this embodiment the maximum duration of a merged video group is limited to 20 seconds and the similarity threshold is set to 0.85), the two frames belong to the same division group; otherwise they belong to different action division segments. Determining the similarity threshold is therefore the key to action division. Through experiments, we obtain the determination formula of the similarity:

$$\mathrm{Sim}(t_1, t_2) = \frac{\left(E_{AP,t_1} + E_{OP,t_1} + E_{K,t_1}\right) - \left(E_{AP,t_2} + E_{OP,t_2} + E_{K,t_2}\right)}{E_{K,t_1} + E_{AP,t_1} + E_{OP,t_1}}$$
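A minimal sketch of this merging step, reusing the similarity helper sketched earlier; representing segments as (start, end) frame ranges and comparing a segment's last frame with the next segment's first frame are assumptions.

```python
def merge_segments(segments, frame_energy, fps, threshold=0.85, max_s=20.0):
    """segments: list of (start, end) frame ranges; frame_energy(t) returns
    the (E_AP, E_OP, E_K) tuple of frame t; fps: frames per second."""
    merged = [segments[0]]
    for start, end in segments[1:]:
        prev_start, prev_end = merged[-1]
        sim = similarity(frame_energy(prev_end), frame_energy(start))
        within_cap = (end - prev_start) / fps <= max_s   # 20-second limit
        if sim < threshold and within_cap:
            merged[-1] = (prev_start, end)    # same division group: merge
        else:
            merged.append((start, end))       # new action division segment
    return merged
```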
Fig. 3 shows the smoothed kinetic-energy curve of a long action with k_i = 10 and the detected segmentation points and key frames; it indicates the ranges that can be merged. In this embodiment, kinetic energy and potential energy are used to characterize the motion state. To obtain the segmentation points and key frames, we find the local minimum and local maximum kinetic-energy points within a given time interval (2 to 6 seconds). The local minimum kinetic-energy points are the segmentation points of the sub-segments, and the local maximum energy ranges are used in the similarity comparison when merging adjacent segments. In the test process, the maximum duration of a merged video group is limited to 20 seconds. The parameters are set to K = [10, ..., 10]^T, all elements of the parameter vectors L and M are set to 1, and the similarity threshold is set to 0.85. Table 1 shows the video segmentation precision (mean values) on the Microsoft Research Cambridge-12 (MSRC-12) action dataset; the experimental results show that our method is effective.
Table 1
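A minimal sketch of the segmentation-point and key-frame search described above: local minima of the smoothed kinetic-energy curve are taken as segmentation points and local maxima as key-frame candidates, searched within a window drawn from the stated 2-6 second interval. The moving-average smoothing and the default window widths are assumptions.

```python
import numpy as np

def cut_points_and_key_frames(e_k, fps, win_s=2.0, smooth_s=0.5):
    """e_k: (T,) kinetic-energy curve; win_s: half-window in seconds,
    chosen from the 2-6 s interval given in the text."""
    width = max(1, int(smooth_s * fps))
    smooth = np.convolve(e_k, np.ones(width) / width, mode="same")
    half = int(win_s * fps)
    cuts, keys = [], []
    for t in range(half, len(smooth) - half):
        window = smooth[t - half:t + half + 1]
        if smooth[t] == window.min():
            cuts.append(t)    # local kinetic-energy minimum -> segmentation point
        elif smooth[t] == window.max():
            keys.append(t)    # local kinetic-energy maximum -> key frame
    return cuts, keys
```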
As can be seen from Fig. 4, the RGB images show that the making-oatmeal behavior contains four sub-actions: moving, reaching, pouring, and placing. Fig. 5 compares the results of several sub-action division methods on this behavior; part of the results are taken from the reference (H. Koppula, R. Gupta, and A. Saxena, "Learning human activities and object affordances from RGB-D videos," The International Journal of Robotics Research, 32(8): 951-970, 2013). Figs. 6 and 7 show the energy-ranking comparisons of different joint points for the drinking and wearing-contact-lenses actions; comparing Fig. 6 and Fig. 7 shows that the energy distribution over the human joint points differs between actions.

Claims (5)

1. A method for expressing human skeleton joint point behavior and motion based on an energy function, characterized in that: the position information of the human skeleton joint points is first acquired by video equipment; then the kinetic energy and potential energy of each skeleton joint point and the person-object interaction potential energy are calculated, quantitatively expressing the person's motion characteristics; the human behavior video sequence is divided by merging adjacent frames, the merging being based on the energy similarity of two adjacent frames, whose energy values are computed from the person's motion characteristics and compared; when the similarity is smaller than the similarity threshold, the two frames belong to the same division group, and otherwise they belong to different action division segments; and sub-action video sequences each with complete action meaning are finally obtained.
2. The method for expressing human skeleton joint point behavior and motion based on an energy function according to claim 1, characterized in that the kinetic energy of each human skeleton joint point is calculated as follows: a Kinect or an Asus Xtion Pro Live is used to obtain the human skeleton and the three-dimensional position coordinates of the skeleton joint points; P_{i,t}(x_{i,t}, y_{i,t}, z_{i,t}) is the position of the i-th skeleton joint point acquired by the video equipment at time t, where (x_{i,t}, y_{i,t}, z_{i,t}) are three-dimensional coordinates with values in meters; the motion speed v_{i,t} of each joint point is calculated from the difference of the joint positions P_{i,t} in two frames or from the mean of multi-frame differences; and the kinetic-energy sum E_{K,t} of the joint points is then calculated as

$$E_{K,t} = \frac{1}{2}\sum_{i \in F_t} k_i v_{i,t}^2, \qquad v_{i,t} = \frac{\lVert P_{i,t} - P_{i,t-1} \rVert}{\Delta t}$$

where k_i is the kinetic-energy weight of the corresponding joint point, F_t is the set of joint points in the frame at time t, E_{K,t} denotes the kinetic-energy sum of all human joint points in frame t, v_{i,t} denotes the speed of the i-th joint point in frame t, Δt denotes the time interval between two adjacent frames, i ∈ F_t indexes the set of all joint points of the human body (i = 1, ..., n, with n the number of joint points), and K denotes the vector of joint-point coefficients.
3. The method for expressing human skeleton joint point behavior and motion based on an energy function according to claim 2, characterized in that the potential energy of each human skeleton joint point is calculated as

$$E_{AP,t} = \sum_{i \in F_t} l_i \lVert P_{i,t} - P_{i,0} \rVert^2$$

where P_{i,0}(x_{i,0}, y_{i,0}, z_{i,0}) denotes the initial position of the i-th skeleton joint point, l_i is the potential-energy weight of the corresponding joint point, and L is the parameter vector, whose elements are set to 1.
4. The method for expressing human skeleton joint point behavior and motion based on an energy function according to claim 3, characterized in that the person-object interaction potential energy is calculated as

$$E_{OP,t} = M^{T}\left(P_{j,t} - P^{O}_{j,t}\right)^{-2}$$

where P^{O}_{j,t} is the position, acquired by the video equipment at time t, of the j-th object having an interactive relation with the human body, P_{j,t} denotes the position of the human body's center of gravity acquired by the video equipment at time t, and M is the parameter vector, whose elements are set to 1.
5. The method for expressing human skeleton joint point behavior and motion based on an energy function according to claim 4, characterized in that the similarity threshold is determined by the formula

$$\mathrm{Sim}(t_1, t_2) = \frac{\left(E_{AP,t_1} + E_{OP,t_1} + E_{K,t_1}\right) - \left(E_{AP,t_2} + E_{OP,t_2} + E_{K,t_2}\right)}{E_{K,t_1} + E_{AP,t_1} + E_{OP,t_1}}.$$
CN201610203252.3A 2016-04-01 2016-04-01 Human skeleton joint point behavior motion expression method based on energy function Pending CN105912985A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610203252.3A CN105912985A (en) 2016-04-01 2016-04-01 Human skeleton joint point behavior motion expression method based on energy function

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610203252.3A CN105912985A (en) 2016-04-01 2016-04-01 Human skeleton joint point behavior motion expression method based on energy function

Publications (1)

Publication Number Publication Date
CN105912985A true CN105912985A (en) 2016-08-31

Family

ID=56745216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610203252.3A Pending CN105912985A (en) 2016-04-01 2016-04-01 Human skeleton joint point behavior motion expression method based on energy function

Country Status (1)

Country Link
CN (1) CN105912985A (en)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102500094A * 2011-10-28 2012-06-20 Beihang University Kinect-based action training method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
WANG YONGXIONG et al.: "Human Activities Segmentation and Location of Key Frames Based on 3D Skeleton", Proceedings of the 33rd Chinese Control Conference *
PENG SHUJUAN: "Key frame extraction of human motion sequence based on center distance feature", Journal of System Simulation *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Title
CN108288015A (en) * 2017-01-10 2018-07-17 Human body action recognition method and system in video based on time scale invariance
CN108288015B (en) * 2017-01-10 2021-10-22 Human body action recognition method and system in video based on time scale invariance
CN107943276A (en) * 2017-10-09 2018-04-20 Human behavior detection and early warning based on a big data platform
CN110314344B (en) * 2018-03-30 2021-08-24 Exercise reminding method, device and system
CN110314344A (en) * 2018-03-30 2019-10-11 Exercise reminding method, device and system
CN108564599A (en) * 2018-04-08 2018-09-21 Human body motion speed estimation method
CN108564599B (en) * 2018-04-08 2020-11-24 Human body motion speed estimation method
CN108520250A (en) * 2018-04-19 2018-09-11 Human motion sequence key frame extraction method
CN108520250B (en) * 2018-04-19 2021-09-14 Human motion sequence key frame extraction method
CN109858406A (en) * 2019-01-17 2019-06-07 Key frame extraction method based on joint point information
CN109858406B (en) * 2019-01-17 2023-04-07 Key frame extraction method based on joint point information
CN111641830B (en) * 2019-03-02 2022-03-15 Multi-mode lossless compression implementation method and system for human skeleton in video
CN111641830A (en) * 2019-03-02 2020-09-08 Multi-mode lossless compression implementation method for human skeleton in video
CN110059661B (en) * 2019-04-26 2022-11-22 Action recognition method, human-computer interaction method, apparatus and storage medium
CN110059661A (en) * 2019-04-26 2019-07-26 Action recognition method, human-computer interaction method, apparatus and storage medium
CN110555387A (en) * 2019-08-02 2019-12-10 Behavior recognition method based on the space-time volume of local joint point trajectories in skeleton sequences
CN110555387B (en) * 2019-08-02 2022-07-19 Behavior recognition method based on the space-time volume of local joint point trajectories in skeleton sequences
CN111144217A (en) * 2019-11-28 2020-05-12 Motion evaluation method based on human body three-dimensional joint point detection
CN111144217B (en) * 2019-11-28 2022-07-01 Motion evaluation method based on human body three-dimensional joint point detection
CN112205979A (en) * 2020-08-18 2021-01-12 Device and method for measuring the mechanical energy of a moving human body in real time
CN112887792A (en) * 2021-01-22 2021-06-01 Video processing method and apparatus, electronic equipment and storage medium
CN113283373A (en) * 2021-06-09 2021-08-20 Method for enhancing limb motion parameters detected by a depth camera
CN113283373B (en) * 2021-06-09 2023-05-05 Method for enhancing limb motion parameters detected by a depth camera
CN113407031A (en) * 2021-06-29 2021-09-17 VR interaction method, system, mobile terminal and computer-readable storage medium
CN113407031B (en) * 2021-06-29 2023-04-18 VR interaction method, system, mobile terminal and computer-readable storage medium

Similar Documents

Publication Title
CN105912985A Human skeleton joint point behavior motion expression method based on energy function
CN108596974B Dynamic scene robot positioning and mapping system and method
CN103718175B Apparatus, method and medium for detecting subject poses
CN102184541B Multi-objective optimized human body motion tracking method
CN104715493B Method for estimating the pose of a moving human body
CN100407798C Three-dimensional geometric modeling system and method
CN107688391A Gesture recognition method and device based on monocular vision
CN104616028B Human limb posture and action recognition method based on spatial partition learning
CN103003846B Joint region display device, joint region detection device, joint region belongingness calculation device, and joint region display method
CN102622766A Multi-objective optimization multi-lens human motion tracking method
CN104573706A Object identification method and system thereof
CN105536205A Upper limb training system based on monocular video human body action sensing
CN104732203A Emotion recognition and tracking method based on video information
CN110288627A Online multi-object tracking method based on deep learning and data association
CN104821010A Binocular-vision-based real-time extraction method and system for three-dimensional hand information
CN109344790A Human behavior analysis method and system based on posture analysis
CN109583294B Multi-mode human behavior recognition method based on motion biomechanics
She et al. A real-time hand gesture recognition approach based on motion features of feature points
CN102740096A Space-time combination based dynamic scene stereo video matching method
Haggag et al. Body parts segmentation with attached props using RGB-D imaging
CN103839280B Vision-information-based human body posture tracking method
CN105654061A 3D face dynamic reconstruction method based on estimation compensation
CN112149531B Human skeleton data modeling method in behavior recognition
Kondori et al. Direct hand pose estimation for immersive gestural interaction
CN107392163A Human hand and object interaction tracking method based on short-baseline stereo imaging

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20160831

WD01 Invention patent application deemed withdrawn after publication