CN110633005A - Optical unmarked three-dimensional human body motion capture method - Google Patents

Optical unmarked three-dimensional human body motion capture method

Info

Publication number
CN110633005A
Authority
CN
China
Prior art keywords
human body
joint points
dimensional
human
joint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910262689.8A
Other languages
Chinese (zh)
Inventor
陈文颉
游清
李晔
陈杰
窦丽华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Beijing Institute of Technology BIT
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201910262689.8A priority Critical patent/CN110633005A/en
Publication of CN110633005A publication Critical patent/CN110633005A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks

Abstract

The invention provides an optical unmarked three-dimensional human body motion capture method, which removes the constraints that traditional non-optical and optical marker-based motion capture methods impose on human movement, and reduces the time needed to capture a single human motion to a certain extent. The method comprises the following steps: determining the serial numbers of the human body joint points and the connection relations between the joint points, using the positions of the different joint points and their correspondence with the real human body; extracting the two-dimensional joint points from multi-angle human body images and connecting the limb bones between the joint points with a deep convolutional neural network, combined with the serial numbers and connection relations, to obtain the two-dimensional coordinate information of the joint points; and drawing a human skeleton model in three-dimensional space using the conversion relations among the different coordinate systems together with the two-dimensional coordinate information of the joint points, so that the skeleton model reflects the real human posture and motion in three-dimensional space, making the capture method convenient for subsequent use in the field of human motion analysis.

Description

Optical unmarked three-dimensional human body motion capture method
Technical Field
The invention provides an optical unmarked three-dimensional human body motion capture method, and belongs to the field of 3D human body motion capture.
Background
3D human body motion capture means obtaining the three-dimensional space coordinates and spatial distribution of the human body joint points, sensed by a sensor and computed by a background program using some method or means; the points are then connected according to the interconnection relations among the joint points, and finally a human skeleton model is drawn in a three-dimensional coordinate system, thereby achieving the goal of capturing human motion.
Currently, this research direction is a focus in the field of computer vision. Unlike the traditional approach of detecting human posture through wearable sensors, also called non-optical 3D human posture detection (Huohua, Li climber. Wearable human body posture detection system design [J]. Electronic Technology Application, 2017, 9: 13-16.), the application range of an optical markerless 3D human motion capture system is not limited to highly specialized settings such as laboratories and industrial measurement, but extends to animation production, physical training, household somatosensory interaction interfaces and other areas; its range of application is wide and deep.
The optical label-free 3D human body motion capture technology has wide requirements in various fields, and further, relevant researchers are encouraged to research in the fields of human body motion modeling, recognition and the like, and the technology is continuously promoted to improve.
At present, the widely applied method in the field of three-dimensional human motion capture is to capture motion with an optical marker technique: optical marker points are attached to the human body, and a vision sensor and a background program capture the motion trajectories of the marker points, from which a three-dimensional human motion model is built. The method used here is instead an optical markerless technique. Within that approach, one common method estimates the two-dimensional coordinates of the target with a deep convolutional neural network and then performs a 2D-to-3D regression of the Cartesian joint coordinates, or uses a 2D-to-3D N×N distance matrix (Francesc Moreno-Noguer. 3D Human Pose Estimation from a Single Image via Distance Matrix Regression. In CVPR, 2017.); that is, the whole algorithm consists of a 2D joint point extraction network and a 2D-3D coordinate transformation network. Obtaining the three-dimensional posture directly from the two-dimensional image through such a regression algorithm or model is intuitive in design and the program running time is generally short, but because the front and rear networks are coupled, cumulative errors arise to a certain extent and the precision of the final result is not very high. The other main method matches the input 2D image, by depth, against a three-dimensional pose database to compute the three-dimensional position of the target (Ching-Hang Chen, Deva Ramanan. 3D Human Pose Estimation = 2D Pose Estimation + Matching. In CVPR, 2017.); this is somewhat more time-consuming than the former, but the final result is more accurate.
Meanwhile, the field of three-dimensional human motion capture distinguishes monocular vision from multi-view vision, i.e., shooting human posture images with a single camera versus shooting them with multiple cameras.
In the monocular method, the hardware platform is simple and convenient to set up, but the precision of the result is not high, and the occlusion problem cannot be effectively solved. Compared with the monocular method, the multi-view method produces more accurate positioning results and can effectively handle occlusion; at the same time, its algorithms are relatively harder to develop, so many research gaps remain in this area.
Disclosure of Invention
The invention provides an optical unmarked three-dimensional human body motion capture method, which removes the constraints that traditional non-optical and optical marker-based motion capture methods impose on human movement, and reduces the time needed to capture a single human motion to a certain extent.
The invention is realized by the following technical scheme:
an optical label-free three-dimensional human body motion capturing method comprises the following steps:
determining the connection relation between the serial number of the human body joint points and each joint point by using the positions of different joint points and the corresponding relation between the positions of the different joint points and the real human body;
extracting two-dimensional joint points of the multi-angle human body image and connecting the limb bones between the joint points by utilizing a deep convolutional neural network and combining the serial numbers and the connection relation to obtain two-dimensional coordinate information of the human body joint points;
and drawing a human skeleton model in a three-dimensional space by using the conversion relation among different coordinate systems and the two-dimensional coordinate information of the human joint points, so that the skeleton model reflects the real human posture motion information in the three-dimensional space, and the motion capture method is convenient to be subsequently used in the field of human motion analysis.
Further, two joint points at the toes of the human feet are added to the joint points.
Further, the joint points include 18 human joint points.
Further, the following method is adopted for determining the serial numbers and the connection relations of the human body joint points by utilizing the positions of different joint points and the corresponding relations between the different joint points and the real human body: firstly, determining the position of a human body joint point to be extracted and the corresponding relation between the joint point and an actual human body skeleton, namely determining the position of the joint point on the human body skeleton; and then coding the joint points for the corresponding relation, determining the serial numbers of the human body joint points, and determining the connection mode between the joint points according to the real human body limb and bone direction.
Further, the method for extracting two-dimensional joint points of a multi-angle human body image and connecting limb bones among the joint points by using the deep convolutional neural network and combining the serial numbers and the connection relations specifically comprises the following steps:
firstly, transmitting human body posture images shot from different angles into a depth convolution neural network;
extracting two-dimensional coordinates of the human body joint points by using a confidence map in the deep convolutional neural network;
thirdly, judging the actual limb direction between the human body joint points by using a part of the limb relation vector field in the deep convolutional neural network and the connection relation of the human body joint points;
connecting the extracted human body joint points in the image by using a greedy analysis algorithm in a deep convolutional neural network;
and fifthly, displaying the joint point extraction result and the connection result on the original human body posture image as a skeleton model of the two-dimensional human body posture.
Further, the following method is adopted for correctly drawing the human skeleton model in the three-dimensional space by utilizing the conversion relationship among different coordinate systems and the two-dimensional coordinate information of the human joint points: the method comprises the steps of utilizing two-dimensional coordinate information of human body joint points, deducing conversion relations among an image pixel coordinate system, an image coordinate system, a camera coordinate system and a world coordinate system to obtain an equation set containing unknown three-dimensional coordinates of the joint points, utilizing a least square method to solve equations to obtain the three-dimensional coordinates of the human body joint points, and utilizing coding sequences and mutual connection relations of the joint points to draw a three-dimensional human body skeleton model.
The invention has the beneficial effects that:
(1) The invention uses the positions of the different joint points and their correspondence with the real human body, and determines the serial numbers and connection relations of the extracted joint points according to the actual directions of the human limbs. This avoids the common drawback of non-optical and optical marker-based motion capture methods, namely that the user must wear sensors or attach optical marker points, which restricts limb movement; the user's motion is therefore unconstrained, and the range of motion postures the human body can display is expanded;
(2) compared with the 16 joint points extractable by existing methods, the invention adds two joint points at the toes of the feet, bringing the number of extracted joint points to 18, so that the final human skeleton model is closer to the actual human posture, reflects limb actions more clearly and accurately, and is better suited to auxiliary training in sports such as race walking;
(3) the invention uses the two-dimensional coordinate information of the joint points, derives a parametric equation set from the coordinate-system transformation relations, solves it by least squares to obtain the corresponding three-dimensional coordinates, and then draws a three-dimensional human skeleton model using the coding order and interconnection relations of the joint points. The whole algorithm flow is easy to implement; at the same time, program execution is accelerated to a certain extent, which helps meet real-time requirements. Moreover, its most outstanding advantage is that, starting from the binocular vision algorithm, a subsequent multi-view algorithm is easy to realize, so the final precision of the captured 3D posture data can be improved by increasing the number of cameras, solving the occlusion problem that a system built with fewer cameras cannot avoid.
Drawings
FIG. 1 is a schematic flow chart of an optical label-free three-dimensional human body motion capture method according to the present invention;
FIG. 2 is a flow chart of the encoding method including 18 human joint points according to the present invention;
FIG. 3 is a schematic flow chart of a method for generating two-dimensional human body posture data by extracting 18 joint points from multiple angles according to the present invention;
FIG. 4 is a schematic flow chart of a method for generating a three-dimensional human body posture skeleton model by binocular vision fusion according to the present invention;
FIG. 5 is a schematic diagram of the human joint coding and connection relationship of the present invention.
Detailed Description
The invention is described in detail below with reference to the drawings and specific examples, but the invention is not limited thereto.
FIG. 1 is a flow chart of an optical label-free three-dimensional human body motion capture method according to the present invention. In this embodiment, the method includes the following parts:
firstly, performing stereo calibration of the two cameras used in the system with the Zhang Zhengyou calibration method, obtaining the intrinsic and extrinsic parameter matrices of the two cameras as well as the rotation matrix and translation matrix between them;
secondly, determining the serial numbers and the connection relations of the 18 extracted human body joint points by utilizing the positions of different joint points and the corresponding relations between the joint points and the real human body and according to the actual human body limb direction;
thirdly, using the deep convolutional neural network combined with the obtained serial numbers and connection relations of the 18 human body joint points, extracting the two-dimensional joint points from the multi-angle human body images and drawing the limb-bone connecting lines between the joint points, so that human skeleton models are correctly drawn in the human posture images at the different angles;
and fourthly, obtaining a parameter-containing equation by utilizing the two-dimensional coordinate information of the human body joint points and deducing the conversion relation among the image pixel coordinate system, the image coordinate system, the camera coordinate system and the world coordinate system, obtaining three-dimensional coordinate information corresponding to the human body joint points by solving the equation by using a least square method, and drawing a three-dimensional human body skeleton model by utilizing the coding sequence and the interconnection relation of the joint points.
The flow and steps of each part will be described in detail below.
In the camera calibration module, the invention adopts the Zhang Zhengyou calibration method to perform stereo calibration of the two cameras. In the system, the two cameras are placed on the left and right, with their optical axes at an angle to each other of between 0 and 60 degrees. After the cameras are placed, the left and right cameras simultaneously shoot 21 groups of calibration board images, each group consisting of one frame from the left camera and one from the right. These 21 groups of calibration images are then fed into the Zhang Zhengyou calibration algorithm, which solves for the intrinsic parameter matrix K and the extrinsic parameters, namely the rotation matrix R (3×3) and the translation matrix t (3×1), of each of the two cameras, as well as the rotation and translation matrices between the two cameras, to facilitate the subsequent calculation of the three-dimensional space coordinates of the human body joint points.
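The calibration outputs feed directly into the projection model used later in the method. As a minimal sketch (the numeric intrinsic values below are made up for illustration, since the patent lists no parameters), the matrices K, R and t combine into the 3×4 projection matrix M = K[R | t]:

```python
# Sketch: assemble the 3x4 projection matrix M = K [R | t] from the
# calibration outputs, and project a world point to pixel coordinates.
# All numeric values are illustrative, not taken from the patent.

def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def projection_matrix(K, R, t):
    """M = K [R | t], a 3x4 matrix."""
    Rt = [R[i] + [t[i]] for i in range(3)]  # append t as a 4th column
    return matmul(K, Rt)

def project(M, xw, yw, zw):
    """Pinhole projection: zc * [u, v, 1]^T = M * [xw, yw, zw, 1]^T."""
    p = matmul(M, [[xw], [yw], [zw], [1.0]])
    zc = p[2][0]
    return p[0][0] / zc, p[1][0] / zc

K = [[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 0.0]
M = projection_matrix(K, R, t)
u, v = project(M, 0.3, 0.2, 2.0)  # -> (395.0, 290.0)
```

The same M matrices, one per camera, are what the triangulation step later inverts to recover the joint's world coordinates.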
FIG. 2 is a flow chart of the encoding method including 18 human joint points according to the present invention. The method mainly comprises the following steps:
s11 is to determine the expected 18 human joint point positions to be extracted and the corresponding relation between the extracted joint points and the actual human skeleton, namely determining the positions of the joint points on the human skeleton;
s12 is according to the corresponding relation determined by S11 to proceed joint point coding work, determine the serial number of 18 obtained human body joint points, and determine the connection mode between the joint points according to the real human body limb and skeleton direction.
Fig. 5 is a schematic diagram of the determined encoding sequence and connection relationship of the human body joint points. It can be seen that the optical unmarked three-dimensional human body motion capture method provided by the invention extracts 18 human body joint points so as to accurately and clearly reflect the motion posture of the human body.
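The numbering and connection step can be sketched in code. Since Fig. 5 itself is not reproduced here, the joint names, indices, and bone list below are illustrative assumptions; only the count of 18 points and the two added toe joints come from the text:

```python
# Illustrative 18-joint numbering and skeleton edge list. The patent
# fixes its actual numbering in its Fig. 5; the names and indices here
# are assumptions chosen for the sketch, including the two toe joints
# (16, 17) that the method adds over a 16-point set.
JOINTS = {
    0: "head", 1: "neck",
    2: "r_shoulder", 3: "r_elbow", 4: "r_wrist",
    5: "l_shoulder", 6: "l_elbow", 7: "l_wrist",
    8: "r_hip", 9: "r_knee", 10: "r_ankle",
    11: "l_hip", 12: "l_knee", 13: "l_ankle",
    14: "spine", 15: "pelvis",
    16: "r_toe", 17: "l_toe",  # the two added toe joints
}

# Limb (bone) connections following real limb directions:
# each pair (a, b) means "draw a bone from joint a to joint b".
SKELETON = [
    (0, 1), (1, 14), (14, 15),               # head-neck-spine-pelvis
    (1, 2), (2, 3), (3, 4),                  # right arm
    (1, 5), (5, 6), (6, 7),                  # left arm
    (15, 8), (8, 9), (9, 10), (10, 16),      # right leg down to toe
    (15, 11), (11, 12), (12, 13), (13, 17),  # left leg down to toe
]

def validate(joints, skeleton):
    """Check that every bone references a defined joint id."""
    return all(a in joints and b in joints for a, b in skeleton)
```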
FIG. 3 is a schematic flow chart of the method for generating two-dimensional human body posture data by extracting the 18 joint points from multiple angles according to the present invention. In this part, we refer to a method proposed in a paper published at CVPR in 2017 (Zhe Cao, Tomas Simon, Shih-En Wei, Yaser Sheikh. Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields. In CVPR, 2017), reproduce this 2D body-posture detection algorithm on the GPU, and apply some optimizations and improvements so that two additional joint points, the two toes of the human body, can be extracted, making the human skeleton model to be established fit the actual body posture more closely. The method mainly comprises the following steps:
s21 is that the human posture images of different angles shot by the left and right cameras in the system are transmitted into the trained deep convolution neural network;
s22 is that the confidence map branch in the double-branch multilayer convolution neural network is used to extract 18 human body joint points in the two-dimensional human body posture image, and the two-dimensional pixel coordinate information of the human body joint points is obtained;
s23 is that aiming at the extracted two-dimensional human body joint points, the actual limb direction between the human body joint points is judged by using Part Affinity Fields (PAFs) and the connection relation of the 18 human body joint points determined in S1, and the direction vector of the limb direction is given;
s24, connecting every two joint points obtained from S22, calculating direction vectors of each connecting line, and then calculating by using real limb direction vectors between Greedy matching Algorithm and the joint points obtained from S23 one by one, wherein the connecting direction between the joint points corresponding to the maximum value of the calculation result is the correct direction, and so on, calculating the real connecting relationship between the joint points;
s25 is a skeleton model for displaying the joint point extraction result and the connection result on the original human body posture image as a two-dimensional human body posture.
FIG. 4 is a schematic flow chart of a method for generating a three-dimensional human body posture skeleton model through binocular vision fusion according to the present invention. The method comprises the steps of utilizing two-dimensional coordinate information of human body joint points, deriving a parameter-containing equation through conversion relations among an image pixel coordinate system, an image coordinate system, a camera coordinate system and a world coordinate system, solving the equation by using a least square method to obtain three-dimensional coordinate information corresponding to the human body joint points, and then utilizing coding sequence and interconnection relations of the joint points to draw a three-dimensional human body skeleton model. The method mainly comprises the following steps:
s31 is firstly deducing an image coordinate system O1Xy and image pixel coordinate system O0The conversion relationship between-uv is as follows
In the formula, dx represents the width of the unit pixel in the x-axis direction, and dy represents the width of the unit pixel in the y-axis direction. The camera coordinate system O can then be deducedc-xcyczcAnd the image coordinate system O1-the conversion relation between xy is
Figure BDA0002015800990000082
Wherein f is the camera focal length. From this, a camera coordinate system O can be obtainedc-xcyczcAnd the image pixel coordinate system O0The conversion relation between uv is
Figure BDA0002015800990000083
In the formula, the K matrix is the intrinsic parameter matrix of the camera. The external parameter matrix of the camera is known as a world coordinate system Ow-xwywzwAnd camera coordinate system Oc-xcyczcA transformation matrix between two matrices, one is a rotation matrix R 3×33 × 3 matrix; the other is a translation matrix t3×1And is a 3 × 1 matrix. The conversion relationship between the two coordinate systems is as follows
Figure BDA0002015800990000084
From the formulas (3) and (4), a world coordinate system Ow-xwywzwAnd the image pixel coordinate system O0The conversion relation between uv is
Figure BDA0002015800990000091
Thus, we define the following M matrix
Figure BDA0002015800990000092
Then for the left and right cameras, the M matrix is as follows
Figure BDA0002015800990000093
Figure BDA0002015800990000094
Let the pixel coordinate values of the same human body joint point obtained by the method 2 in the left and right cameras be (u)1,v1),(u2,v2) The three-dimensional coordinate of the joint point in the world coordinate system is (x)w,yw,zw). The following system of equations can be derived from the above derivation
Figure BDA0002015800990000095
The formula (9) is the over-determined equation set containing the unknown three-dimensional coordinates of the joint points;
s32, solving the overdetermined equation set obtained in S31 by using a least square method to obtain the three-dimensional space coordinates of the human body joint points;
s33, drawing a joint point distribution map under a world coordinate system according to the three-dimensional coordinates of the human joint points calculated by S32, connecting lines according to the connection relations among the joint points and the corresponding joint point serial numbers, and finally drawing a three-dimensional human skeleton model.

Claims (6)

1. An optical label-free three-dimensional human body motion capture method is characterized by comprising the following steps:
determining the connection relation between the serial number of the human body joint points and each joint point by using the positions of different joint points and the corresponding relation between the positions of the different joint points and the real human body;
extracting two-dimensional joint points of the multi-angle human body image and connecting the limb bones between the joint points by utilizing a deep convolutional neural network and combining the serial numbers and the connection relation to obtain two-dimensional coordinate information of the human body joint points;
and drawing a human skeleton model in a three-dimensional space by using the conversion relation among different coordinate systems and the two-dimensional coordinate information of the human joint points, so that the skeleton model reflects the real human posture motion information in the three-dimensional space, and the motion capture method is convenient to be subsequently used in the field of human motion analysis.
2. The method of claim 1, wherein two joint points at the toes of the human feet are added to the joint points.
3. The method of claim 2, wherein the joint points comprise 18 human joint points.
4. The method according to claim 1, 2 or 3, wherein the following method is adopted for determining the serial numbers and the connection relations of the human joint points by utilizing the positions of different joint points and the corresponding relations between the different joint points and the real human body: firstly, determining the position of a human body joint point to be extracted and the corresponding relation between the joint point and an actual human body skeleton, namely determining the position of the joint point on the human body skeleton; and then coding the joint points for the corresponding relation, determining the serial numbers of the human body joint points, and determining the connection mode between the joint points according to the real human body limb and bone direction.
5. The method as claimed in claim 1, 2 or 3, wherein the step of performing two-dimensional joint point extraction of multi-angle human body images and limb bone connecting lines between joint points by using the deep convolutional neural network in combination with numbering and connection relations specifically comprises the following steps:
firstly, transmitting human body posture images shot from different angles into a depth convolution neural network;
extracting two-dimensional coordinates of the human body joint points by using a confidence map in the deep convolutional neural network;
thirdly, judging the actual limb direction between the human body joint points by using a part of the limb relation vector field in the deep convolutional neural network and the connection relation of the human body joint points;
connecting the extracted human body joint points in the image by using a greedy analysis algorithm in a deep convolutional neural network;
and fifthly, displaying the joint point extraction result and the connection result on the original human body posture image as a skeleton model of the two-dimensional human body posture.
6. The method as claimed in claim 1, 2 or 3, wherein the following method is adopted for correctly drawing the human skeleton model in the three-dimensional space by using the conversion relation between different coordinate systems and the two-dimensional coordinate information of the human joint points: the method comprises the steps of utilizing two-dimensional coordinate information of human body joint points, deducing conversion relations among an image pixel coordinate system, an image coordinate system, a camera coordinate system and a world coordinate system to obtain an equation set containing unknown three-dimensional coordinates of the joint points, utilizing a least square method to solve equations to obtain the three-dimensional coordinates of the human body joint points, and utilizing coding sequences and mutual connection relations of the joint points to draw a three-dimensional human body skeleton model.
CN201910262689.8A 2019-04-02 2019-04-02 Optical unmarked three-dimensional human body motion capture method Pending CN110633005A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910262689.8A CN110633005A (en) 2019-04-02 2019-04-02 Optical unmarked three-dimensional human body motion capture method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910262689.8A CN110633005A (en) 2019-04-02 2019-04-02 Optical unmarked three-dimensional human body motion capture method

Publications (1)

Publication Number Publication Date
CN110633005A true CN110633005A (en) 2019-12-31

Family

ID=68968037

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910262689.8A Pending CN110633005A (en) 2019-04-02 2019-04-02 Optical unmarked three-dimensional human body motion capture method

Country Status (1)

Country Link
CN (1) CN110633005A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111291687A (en) * 2020-02-11 2020-06-16 青岛联合创智科技有限公司 3D human body action standard identification method
CN111528868A (en) * 2020-05-13 2020-08-14 湖州维智信息技术有限公司 Method for determining limb motion characteristic vector of child ADHD screening and evaluating system
CN111754620A (en) * 2020-06-29 2020-10-09 武汉市东旅科技有限公司 Human body space motion conversion method, conversion device, electronic equipment and storage medium
CN112183316A (en) * 2020-09-27 2021-01-05 中山大学 Method for measuring human body posture of athlete
CN113191934A (en) * 2021-04-15 2021-07-30 广州紫为云科技有限公司 Method and system for providing skeleton point data to upper layer engine
CN113421286A (en) * 2021-07-12 2021-09-21 北京未来天远科技开发有限公司 Motion capture system and method
WO2021185195A1 (en) * 2020-03-18 2021-09-23 深圳市瑞立视多媒体科技有限公司 Multi-thread-based motion capturing method and apparatus, device and storage medium
CN113487674A (en) * 2021-07-12 2021-10-08 北京未来天远科技开发有限公司 Human body pose estimation system and method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011206148A * 2010-03-29 2011-10-20 Terumo Corp Three-dimensional human body model generating device, three-dimensional human body model generating method, and three-dimensional human body model generating program
CN105631861A * 2015-12-21 2016-06-01 浙江大学 Method for recovering a three-dimensional human body pose from an unmarked monocular image combined with a height map
CN108647663A * 2018-05-17 2018-10-12 西安电子科技大学 Human posture estimation method based on deep learning and a multi-level graph structure model
CN108829232A * 2018-04-26 2018-11-16 深圳市深晓科技有限公司 Method for acquiring three-dimensional coordinates of human skeleton joint points based on deep learning
CN109325469A * 2018-10-23 2019-02-12 北京工商大学 A human posture recognition method based on a deep neural network

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111291687A (en) * 2020-02-11 2020-06-16 青岛联合创智科技有限公司 3D human body action standard identification method
CN111291687B (en) * 2020-02-11 2022-11-11 青岛联合创智科技有限公司 3D human body action standard identification method
WO2021185195A1 (en) * 2020-03-18 2021-09-23 深圳市瑞立视多媒体科技有限公司 Multi-thread-based motion capturing method and apparatus, device and storage medium
CN111528868A (en) * 2020-05-13 2020-08-14 湖州维智信息技术有限公司 Method for determining limb motion feature vectors for a child ADHD screening and evaluation system
CN111754620A (en) * 2020-06-29 2020-10-09 武汉市东旅科技有限公司 Human body space motion conversion method, conversion device, electronic equipment and storage medium
CN112183316A (en) * 2020-09-27 2021-01-05 中山大学 Method for measuring the body posture of athletes
CN112183316B (en) * 2020-09-27 2023-06-30 中山大学 Method for measuring the body posture of athletes
CN113191934A (en) * 2021-04-15 2021-07-30 广州紫为云科技有限公司 Method and system for providing skeleton point data to an upper-layer engine
CN113421286A (en) * 2021-07-12 2021-09-21 北京未来天远科技开发有限公司 Motion capture system and method
CN113487674A (en) * 2021-07-12 2021-10-08 北京未来天远科技开发有限公司 Human body pose estimation system and method
CN113421286B (en) * 2021-07-12 2024-01-02 北京未来天远科技开发有限公司 Motion capture system and method
CN113487674B (en) * 2021-07-12 2024-03-08 未来元宇数字科技(北京)有限公司 Human body pose estimation system and method

Similar Documents

Publication Publication Date Title
CN110633005A (en) Optical unmarked three-dimensional human body motion capture method
CN106503671B (en) Method and apparatus for determining human face pose
CN104680582B (en) An object-oriented customized three-dimensional (3D) human body model creation method
CN102697508B (en) Gait recognition method using three-dimensional reconstruction from monocular vision
CN104036488B (en) Binocular vision-based human body posture and action research method
CN109712172A (en) A pose measurement method combining initial pose measurement with target tracking
CN106843507B (en) Virtual reality multi-person interaction method and system
CN104034269B (en) A monocular vision measurement method and device
JP5795250B2 (en) Subject posture estimation device and video drawing device
CN108628306B (en) Robot walking obstacle detection method and device, computer equipment and storage medium
CN109758756B (en) Gymnastics video analysis method and system based on 3D camera
CN113362452B (en) Hand posture three-dimensional reconstruction method and device and storage medium
CN114119739A (en) Binocular vision-based hand key point space coordinate acquisition method
CN102479386A (en) Monocular-video-based three-dimensional motion tracking method for the upper half of the human body
Zou et al. Automatic reconstruction of 3D human motion pose from uncalibrated monocular video sequences based on markerless human motion tracking
CN111429571B (en) Rapid stereo matching method based on spatio-temporal image information joint correlation
CN113487674A (en) Human body pose estimation system and method
Fang et al. 3D human pose estimation using RGBD camera
CN112150609A (en) VR system based on indoor real-time dense three-dimensional reconstruction technology
Daniilidis et al. Real-time 3d-teleimmersion
CN113421286B (en) Motion capture system and method
CN115035546A (en) Three-dimensional human body posture detection method and device and electronic equipment
CN114935316A (en) Standard depth image generation method based on optical tracking and monocular vision
CN112329723A (en) Binocular camera-based multi-person human body 3D skeleton key point positioning method
Cordea et al. 3-D head pose recovery for interactive virtual reality avatars

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20191231