CN110544301A - Three-dimensional human body action reconstruction system, method and action training system - Google Patents

Three-dimensional human body action reconstruction system, method and action training system

Info

Publication number
CN110544301A
CN110544301A
Authority
CN
China
Prior art keywords
human body
dimensional
action
motion
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910843462.2A
Other languages
Chinese (zh)
Inventor
梁鑫
李培杰
杜钦涛
张炜乐
樊奕良
李烁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN201910843462.2A
Publication of CN110544301A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F17/00Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10Complex mathematical operations
    • G06F17/16Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85Stereo camera calibration

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Optimization (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Pure & Applied Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Mathematics (AREA)
  • Mathematical Analysis (AREA)
  • Geometry (AREA)
  • Computer Graphics (AREA)
  • Algebra (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a three-dimensional human body action reconstruction system, a three-dimensional human body action reconstruction method and an action training system. A corrected binocular camera based on binocular vision captures action images of a target human body, two-dimensional human body joint points are identified on the two-dimensional action images, and the three-dimensional joint points of the target human body are reconstructed based on binocular stereo vision and the two-dimensional human body joint points, so that the three-dimensional action of the target human body is reconstructed. Because no depth map needs to be generated through complex graphic transformation, errors introduced in the depth conversion process are avoided, objects around the target human body are not mistakenly identified as part of the target human body, and the requirements on the application scene are reduced. This solves the technical problems that the existing depth-camera-based human body three-dimensional action recognition method places high requirements on the application scene and has low recognition accuracy in complex application scenes.

Description

Three-dimensional human body action reconstruction system, method and action training system
Technical Field
The application relates to the technical field of motion recognition, and in particular to a three-dimensional human motion reconstruction system, a three-dimensional human motion reconstruction method and a motion training system.
Background
With the development of artificial intelligence, human body posture recognition technology has made important breakthroughs on the basis of the larger data sets and strong computing power available in the big data era.
The existing three-dimensional human body motion recognition method uses a depth camera to capture the three-dimensional motion of the human body and then evaluates whether the motion meets the standard. However, capturing three-dimensional human motion with a depth camera places high requirements on the application scene; in complex application scenes (such as cluttered rooms or scenes with strong outdoor light), objects around the human body are mistakenly recognized as parts of the human body with a certain probability, so errors arise in the judgment of the three-dimensional motion of the human body and the recognition accuracy is reduced.
Disclosure of Invention
The application aims to provide a three-dimensional human body action reconstruction system, a three-dimensional human body action reconstruction method and an action training system, which are used for solving the technical problems that the existing depth camera human body three-dimensional action recognition method has higher requirements on application scenes and has lower recognition accuracy in complex application scenes.
The present application provides in a first aspect a three-dimensional human motion reconstruction system, comprising:
The camera calibration module is used for calibrating the binocular camera according to the collected calibration point data;
The camera stereo correction module is used for carrying out stereo correction on the calibrated camera parameters after the binocular camera calibration based on a Bouguet algorithm, so that the binocular camera can obtain parallel coplanar images of the left and right visual angles of the binocular camera at the same moment;
The binocular camera is used for acquiring a target human body action image sequence and sending the action image sequence to the two-dimensional human body action recognition module;
The two-dimensional human body motion recognition module is used for recognizing key human body joint parts in the motion image sequence, extracting two-dimensional joint points which belong to the same target human body in each frame of the motion image sequence so as to reconstruct a human body skeleton of the target human body, and storing and sending the human body skeleton information of each frame to the three-dimensional motion reconstruction module;
And the three-dimensional motion reconstruction module is used for restoring the real position of the target human body in a three-dimensional space by combining binocular stereoscopic vision based on the human body skeleton information in each frame, and reconstructing the three-dimensional motion of the target human body.
Optionally, the camera calibration module is specifically configured to:
Carrying out calibration preprocessing on the binocular camera according to the collected calibration point data, and calibrating the binocular camera based on the internal parameter matrix, external parameter matrix and three-dimensional correction parameters of the binocular camera obtained through the preprocessing.
Optionally, the camera stereo correction module is specifically configured to:
Performing stereo correction on the internal parameter matrix and the external parameter matrix based on a Bouguet algorithm according to the reference image sequence shot by the binocular camera, so that reference images of the reference image sequence shot by the binocular camera after the stereo correction are parallel and coplanar at the same moment.
Optionally, the two-dimensional human body motion recognition module is specifically configured to:
Identifying key human body joint parts in the action image sequence based on a preset convolutional neural network, extracting the 25 joint points belonging to the same target human body in each frame of the action image sequence so as to reconstruct the human body skeleton of the target human body, and storing and sending the human body skeleton information of each frame to the three-dimensional action reconstruction module.
Optionally, the three-dimensional motion reconstruction module is specifically configured to:
restoring the human body skeleton information in each frame to the real position of the target human body in the three-dimensional space based on a preset back projection matrix, and reconstructing the three-dimensional action of the target human body.
The second aspect of the present application further provides a three-dimensional human body motion reconstruction method, including:
Performing camera calibration on a binocular camera based on collected calibration point data to obtain calibration camera parameters, wherein the calibration camera parameters comprise an internal parameter matrix, an external parameter matrix and a three-dimensional correction parameter;
Performing stereo correction on the binocular camera based on a Bouguet algorithm and the calibrated camera parameters, so that the binocular camera can obtain parallel coplanar images of the left and right visual angles of the binocular camera at the same moment;
Acquiring a motion image sequence of a target human body through the binocular camera;
Identifying key limb joint parts of the human body in the action image sequence, and extracting two-dimensional joint points which belong to the same target human body in each frame of the action image sequence so as to reconstruct the human body skeleton of the target human body and obtain the human body skeleton information of each frame;
and restoring the real position of the target human body in the three-dimensional space by combining binocular stereo vision based on the human body skeleton information in each frame, and reconstructing the three-dimensional action of the target human body.
A third aspect of the present application provides a motion training system, including any one of the three-dimensional human motion reconstruction systems of the first aspect, further including a motion evaluation module;
The action evaluation module is used for performing difference comparison between the three-dimensional action of the target human body acquired from the three-dimensional action reconstruction module and a preset standard action, and outputting an action evaluation result corresponding to the difference comparison result.
Optionally, the action evaluation module is specifically configured to:
Performing joint point combination angle similarity judgment, average curvature comparison of joint point combination motion tracks and human body joint point combination motion quantity comparison on the three-dimensional motion of the target human body acquired from the three-dimensional motion reconstruction module and a preset standard motion;
Acquiring a first action evaluation result corresponding to the result of the joint point combination angle similarity judgment, a second action evaluation result corresponding to the result of the average curvature comparison of the joint point combination motion trail and a third action evaluation result corresponding to the result of the human body joint point combination motion quantity comparison;
and performing weighting processing on the first action evaluation result, the second action evaluation result and the third action evaluation result, and outputting the action evaluation result obtained after the weighting processing.
Optionally, the system further includes: a music rhythm integrating degree module;
The music rhythm integrating degree module is used for extracting the music features of the music being played based on an audio extraction algorithm, judging, in units of beats, the degree to which the human body actions of the target human body in consecutive frames match the series of actions of the preset standard action on the corresponding beat, and outputting a beat matching result.
Optionally, the system further includes: a health guarding module;
The health guarding module is used for calculating the physical fitness level, suitable dance types and current training exercise amount of the user through a cloud platform algorithm according to the user information of the target human body, recommending the most suitable action set for the user in a personalized manner according to the calculation result while avoiding actions whose training intensity is too low or too high for the user, recording the training duration and training intensity of each training session of the user, and prompting the user to rest by voice broadcast and/or a display pop-up window.
The application provides a three-dimensional human motion reconstruction system, which includes: the camera calibration module, used for calibrating the binocular camera according to the collected calibration point data; the camera stereo correction module, used for carrying out stereo correction on the calibrated camera parameters after the binocular camera calibration based on a Bouguet algorithm so that the binocular camera can obtain parallel coplanar images of the left and right visual angles of the binocular camera at the same moment; the motion acquisition module, used for acquiring a motion image sequence of the target human body through the binocular camera and sending the motion image sequence to the two-dimensional human body motion recognition module; the two-dimensional human body motion recognition module, used for recognizing key limb joint parts of the human body in the motion image sequence, extracting two-dimensional joint points belonging to the same target human body in each frame of the motion image sequence so as to reconstruct a human body skeleton of the target human body, and storing and sending the human body skeleton information of each frame to the three-dimensional motion reconstruction module; and the three-dimensional motion reconstruction module, used for restoring the real position of the target human body in three-dimensional space by combining binocular stereo vision based on the human body skeleton information in each frame, and reconstructing the three-dimensional motion of the target human body.
In the three-dimensional human body motion reconstruction system provided by the application, a corrected binocular camera based on binocular vision captures motion images of a target human body, two-dimensional human body joint points are identified on the two-dimensional motion images, and the three-dimensional joint points of the target human body are reconstructed based on binocular stereo vision and the two-dimensional human body joint points, so that the three-dimensional motion of the target human body is reconstructed. Because no depth map needs to be generated through complex graphic transformation, errors introduced in the depth conversion process are avoided, objects around the target human body are not mistakenly recognized as part of the target human body, and the requirements on the application scene are reduced, which solves the technical problems that the existing depth-camera-based human body three-dimensional motion recognition method places high requirements on the application scene and has low recognition accuracy in complex application scenes.
drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic structural diagram of an embodiment of a three-dimensional human body motion reconstruction system according to an embodiment of the present disclosure;
Fig. 2 is a schematic flowchart of an embodiment of a three-dimensional human body motion reconstruction method according to an embodiment of the present disclosure;
Fig. 3 is a schematic structural diagram of an action training system provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments obtained by a person skilled in the art from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
Referring to Fig. 1, Fig. 1 is a schematic structural diagram of a three-dimensional human body motion reconstruction system provided in an embodiment of the present application. The three-dimensional human body motion reconstruction system provided in the embodiment of the present application includes:
The camera calibration module 101 is configured to perform binocular camera calibration according to the collected calibration point data.
It should be noted that the binocular camera used for acquiring the target human motion images in the three-dimensional human motion reconstruction system needs to be calibrated. One or more binocular cameras may be used to acquire calibration points, which may be checkerboard calibration points or two-dimensional code calibration points; the calibration method of the calibration points is not limited here. Calibration point acquisition only needs to be performed when the binocular camera is used for the first time and does not need to be repeated subsequently; the acquired calibration point data are stored in the storage module so that they can be called again later. The binocular camera is calibrated according to the calibration point data to obtain the calibrated camera parameters of the binocular camera.
The camera stereo correction module 102 is configured to perform stereo correction on the calibrated camera parameters after the binocular camera calibration based on a Bouguet algorithm, so that the binocular camera can obtain parallel coplanar images of the left and right visual angles of the binocular camera at the same moment.
It should be noted that after the binocular camera is calibrated to obtain the calibrated camera parameters, the stereo correction process is completed through the Bouguet algorithm (invokable through opencv): the correction parameter matrices of the left and right cameras are calculated, distortion is removed, and the two image planes of the binocular camera are completely aligned, so that parallel coplanar images of the left and right visual angles of the binocular camera at the same moment are obtained.
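As a purely illustrative sketch of this rectification step (not part of the original disclosure): OpenCV's stereoRectify implements the Bouguet-style rectification described above, and the file names and the assumption that the calibration results have already been computed and stored are hypothetical.

```python
import cv2
import numpy as np

# Load previously stored calibration results (hypothetical file produced by the
# camera calibration module): intrinsics K1/K2, distortion d1/d2, stereo R/T.
calib = np.load("stereo_calibration.npz")
K1, d1, K2, d2, R, T = (calib[k] for k in ("K1", "d1", "K2", "d2", "R", "T"))
image_size = tuple(int(v) for v in calib["image_size"])  # (width, height)

# Bouguet rectification: rotations R1/R2 and projections P1/P2 that make the two
# image planes parallel and coplanar, plus the reprojection matrix Q.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, d1, K2, d2, image_size, R, T)

# Build the undistort/rectify lookup maps once, then remap every captured frame.
map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, image_size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, image_size, cv2.CV_32FC1)

left_raw = cv2.imread("left_frame.png")
right_raw = cv2.imread("right_frame.png")
left_rect = cv2.remap(left_raw, map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right_raw, map2x, map2y, cv2.INTER_LINEAR)
# left_rect / right_rect are now row-aligned, as required for disparity computation.
```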
The action acquisition module 103 is configured to acquire an action image sequence of the target human body through the binocular camera and send the action image sequence to the two-dimensional human body action recognition module 104.
It should be noted that the motion image sequence of the target human body is acquired by the calibrated binocular camera, and the motion image sequence is sent to the two-dimensional human body motion recognition module 104 for two-dimensional image processing.
The two-dimensional human body motion recognition module 104 is configured to recognize key limb joint parts of a human body in a motion image sequence, extract two-dimensional joint points belonging to the same target human body in each frame of the motion image sequence, so as to reconstruct a human body skeleton of the target human body, store human body skeleton information of each frame, and send the human body skeleton information to the three-dimensional motion reconstruction module 105.
It should be noted that, the joint points belonging to the same person in each frame of the motion image sequence are extracted to form a complete human body skeleton, and the skeleton information of each frame is cached in a database by using a json-format file and further provided to the three-dimensional motion reconstruction module 105 for use. In the embodiment of the present application, the two-dimensional human body motion recognition module 104 uses an open-source multi-person two-dimensional posture neural network openposition to realize two-dimensional human body motion recognition to reconstruct a human body skeleton of a target human body.
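Purely as an illustrative sketch of the per-frame json caching described above (the file layout and field names are assumptions, not taken from the original disclosure), the skeleton information of each frame could be serialized as follows:

```python
import json

def cache_frame_skeletons(frame_index, people_keypoints, path):
    """people_keypoints: list of per-person sequences of 25 (x, y, confidence) joints."""
    record = {
        "frame": frame_index,
        "people": [
            {"joints": [[float(x), float(y), float(c)] for x, y, c in person]}
            for person in people_keypoints
        ],
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(record, f)

# Example: one person with 25 dummy joints for frame 0.
cache_frame_skeletons(0, [[(0.0, 0.0, 1.0)] * 25], "frame_0000.json")
```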
The three-dimensional motion reconstruction module 105 is configured to restore the joint points to the real positions of the target human body in three-dimensional space based on the human body skeleton information in each frame, in combination with binocular stereo vision, and to reconstruct the three-dimensional motion of the target human body.
It should be noted that the motion image sequences of the left and right visual angles of the binocular camera, after coplanar alignment correction, are input into the two-dimensional human body motion recognition module 104 to obtain skeleton information represented by two-dimensional joint point information. The three-dimensional motion reconstruction module 105 then restores the human body skeleton information in each frame to the real joint point positions of the target human body in three-dimensional space by combining the binocular stereo vision method, and reconstructs the three-dimensional joint points of the target human body, so that the three-dimensional motion of the target human body is reconstructed from the three-dimensional joint points.
The three-dimensional human body action reconstruction system provided in the embodiment of the application captures action images of the target human body through a corrected binocular camera based on binocular vision, performs two-dimensional human body joint point recognition on the two-dimensional action images, and reconstructs the three-dimensional joint points of the target human body based on binocular stereo vision and the two-dimensional human body joint points, so that the three-dimensional action of the target human body is reconstructed. Because no depth map needs to be generated through complex graphic transformation, errors introduced in the depth conversion process are avoided, objects around the target human body are not mistakenly recognized as part of the target human body, and the requirements on the application scene are reduced, which solves the technical problems that the existing depth-camera-based human body three-dimensional action recognition method places high requirements on the application scene and has low recognition accuracy in complex application scenes.
As a further improvement of the three-dimensional human body motion reconstruction system in the embodiment of the present application, the camera calibration module in the embodiment of the present application is specifically configured to perform calibration preprocessing of the binocular camera according to the collected calibration point data, and perform calibration of the binocular camera based on an internal parameter matrix, an external parameter matrix, and a stereo correction parameter of the binocular camera obtained through the preprocessing.
It should be noted that the method for obtaining the internal parameter matrix and the external parameter matrix of the binocular camera may be described as follows.
Internal parameters: the intrinsic parameters of the lens, namely the lens center position (c_x, c_y) and the focal lengths f_x, f_y, all expressed in pixel units.
The internal reference matrix is represented as:
K = [f_x, 0, c_x; 0, f_y, c_y; 0, 0, 1]
The basic principle is as follows:
s[u, v, 1]^T = K[r_1 r_2 r_3 t][X, Y, 0, 1]^T = K[r_1 r_2 t][X, Y, 1]^T
where K is the internal parameter matrix of the camera, [X, Y, 1]^T is the homogeneous coordinate of a point on the template plane, [u, v, 1]^T is the homogeneous coordinate of the corresponding point projected onto the image plane, and [r_1 r_2 r_3] and t are respectively the rotation matrix and the translation vector of the camera coordinate system relative to the world coordinate system.
The transformation relationship between two two-dimensional planes (for example, between the plane of the checkerboard calibration board and the camera imaging plane in reality), which is determined by the rotation matrix r and the translation vector t, can be described by a homography matrix H:
H = [h_1 h_2 h_3] = γK[r_1 r_2 t]
According to the properties of the rotation matrix, namely r_1^T r_2 = 0 and ||r_1|| = ||r_2|| = 1, each image provides the following two basic constraints on the internal parameter matrix:
h_1^T K^(-T) K^(-1) h_2 = 0,
h_1^T K^(-T) K^(-1) h_1 = h_2^T K^(-T) K^(-1) h_2.
Because the binocular camera has 4 unknown internal parameters, when the number of the shot images is more than or equal to 3, the internal parameter matrix K of one camera of the binocular camera can be linearly and uniquely solved, and the calculation can be completed by utilizing the monocular camera calibration algorithm of opencv.
The number of images taken by the module during calibration is 25, which can greatly improve the stability of the calibration precision.
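As a minimal, purely illustrative sketch of this monocular calibration step (the checkerboard dimensions and file names are assumptions rather than values fixed by the original text), the opencv calibration routine could be driven as follows:

```python
import glob
import cv2
import numpy as np

# Assumed 9x6 inner-corner checkerboard; the world points lie on the Z = 0 plane.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

for path in sorted(glob.glob("calib_left_*.png")):   # e.g. the 25 calibration shots
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        continue
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    obj_points.append(objp)
    img_points.append(corners)

# Zhang-style calibration: recovers the internal parameter matrix K and distortion.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
print("internal parameter matrix K:\n", K)
```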
After binocular calibration, the relative position of the left and right cameras of the binocular camera also needs to be known, that is, the external parameters need to be obtained.
External parameters: the camera position parameters, i.e. the rigid transformation from the real-world coordinate system in which the object is located to the camera coordinate system, which can be expressed as the combination [R t] of a rotation matrix R and a translation vector t.
Let P_l and P_r be the coordinates of a point P in the left and right camera coordinate systems respectively, and P_w the coordinate of P in the world coordinate system; they can be calculated by:
P_l = R_l P_w + T_l,
P_r = R_r P_w + T_r.
The two formulae can be related by the following formula:
P_l = R^T (P_r - T)
The rotation matrix R and the translation vector T between the two cameras can be obtained from the above three formulas:
R = R_r (R_l)^T
T = T_r - R T_l
The internal and external parameters of the binocular camera can be obtained, and the relative positions of the two cameras are unchanged as long as the binocular camera is not damaged, so that the parameters do not need to be calibrated again.
After the internal and external parameters of the camera are obtained, the back projection matrix Q of the binocular camera can be calculated through the opencv open source library.
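The following is only an illustrative sketch of how the external parameters R, T and the back projection matrix Q could be obtained with opencv; the image size, variable names and the reuse of the per-camera calibration results from the previous sketch are assumptions made for illustration.

```python
import cv2
import numpy as np

# obj_points, img_points_left, img_points_right: matched checkerboard corners for
# both views (same structure as in the monocular calibration sketch above).
# K1, d1 and K2, d2: per-camera intrinsics and distortion from cv2.calibrateCamera.
image_size = (1920, 1080)  # assumed capture resolution (width, height)

flags = cv2.CALIB_FIX_INTRINSIC  # keep the already-estimated intrinsics fixed
ret, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_points, img_points_left, img_points_right,
    K1, d1, K2, d2, image_size, flags=flags)

# Bouguet rectification also returns the 4x4 back projection (reprojection) matrix Q.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, d1, K2, d2, image_size, R, T)

# Store everything for later reuse, as described for the data storage module.
np.savez("stereo_calibration.npz", K1=K1, d1=d1, K2=K2, d2=d2,
         R=R, T=T, Q=Q, image_size=np.array(image_size))
```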
At this point, after camera calibration and acquisition of parameters for stereo correction are completed, data are stored in the data storage module for subsequent use.
As a further improvement of the three-dimensional human body action reconstruction system in the embodiment of the present application, the camera stereo correction module in the embodiment of the present application is specifically configured to perform stereo correction on the internal parameter matrix and the external parameter matrix based on a Bouguet algorithm according to the reference image sequence captured by the binocular camera, so that the reference images of the reference image sequence captured by the binocular camera after the stereo correction are parallel and coplanar at the same moment.
As a further improvement of the three-dimensional human body motion reconstruction system in the embodiment of the present application, the two-dimensional human body motion recognition module in the embodiment of the present application is specifically configured to identify key human body joint parts in the motion image sequence based on a preset convolutional neural network, extract the 25 joint points belonging to the same target human body in each frame of the motion image sequence so as to reconstruct the human body skeleton of the target human body, and store and send the human body skeleton information of each frame to the three-dimensional motion reconstruction module.
It should be noted that the preset convolutional neural network matches the key limb joint positions of the body in the two-dimensional image with the corresponding individual by using the non-parametric representation Part Affinity Fields (PAFs) and reconstructs the human body joint skeleton. The main idea is to use a bottom-up, greedy parsing step to achieve both high accuracy and real-time performance, with body part localization and association performed simultaneously on two branches. The positions of 25 joint points of the human body in the two-dimensional image can be identified with high precision at real-time speed.
The method for recognizing the two-dimensional human body posture with the preset convolutional neural network is as follows: a two-dimensional human body image video sequence with pixel scale w × h is input, key points are located for the individuals appearing in each frame of the image video sequence, and prediction is then carried out with a CNN model having a dual-branch multi-stage architecture. One branch of the CNN predicts a set of confidence maps S of human body parts and the other branch predicts a set of two-dimensional limb vector fields (PAFs), where each body part has a corresponding confidence map and each human limb has a vector field. The confidence maps and the PAFs are parsed by greedy inference, and the 2D skeleton key point information of each person, 25 key points in total, is output for each frame of the image video sequence.
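The following is only an illustrative sketch of driving the open-source openpose BODY_25 model from Python; the exact Python binding calls vary between openpose versions (older releases, for example, accept a plain list in emplaceAndPop), and the model path and input file are assumptions.

```python
import cv2
import pyopenpose as op  # OpenPose Python bindings, assumed built and on the path

params = {"model_folder": "openpose/models/", "model_pose": "BODY_25"}
wrapper = op.WrapperPython()
wrapper.configure(params)
wrapper.start()

frame = cv2.imread("left_rect.png")  # a rectified frame from the binocular camera
datum = op.Datum()
datum.cvInputData = frame
wrapper.emplaceAndPop(op.VectorDatum([datum]))

# poseKeypoints has shape (num_people, 25, 3): (x, y, confidence) per joint.
keypoints = datum.poseKeypoints
print(None if keypoints is None else keypoints.shape)
```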
As a further improvement of the embodiment of the present application, the three-dimensional motion reconstruction module provided in the embodiment of the present application is specifically configured to restore the human body skeleton information in each frame to the real joint point positions of the target human body in three-dimensional space based on a preset back projection matrix, and to reconstruct the three-dimensional motion of the target human body.
It should be noted that the motion image sequences of the left and right visual angles of the binocular camera, after coplanar alignment correction, are input into the two-dimensional human body motion recognition module 104 to obtain the pixel coordinates (x, y) of the homologous points of the 25 human body joints in the two views. The disparity d of each pair of homologous points can be calculated from the obtained parameters, and the coordinates of the homologous two-dimensional human body joint points imaged by the binocular camera from different visual angles can then be converted into the positions of the three-dimensional human body joint points through a matrix operation with the back projection matrix Q obtained during calibration. The three-dimensional motion reconstruction module 105 thereby restores the human body skeleton information in each frame to the real joint point positions of the target human body in three-dimensional space, and reconstructs the three-dimensional motion of the target human body.
For pixel coordinates (x, y) and disparity d, [X, Y, Z, W]^T = Q[x, y, d, 1]^T, and the real spatial coordinate position of the corresponding joint point is (X/W, Y/W, Z/W).
The back projection matrix Q is:
Q = [1, 0, 0, -c_x; 0, 1, 0, -c_y; 0, 0, 0, f; 0, 0, -1/T_x, (c_x - c'_x)/T_x]
where (c_x, c_y) is the principal point of the left rectified image, f is the rectified focal length, T_x is the baseline between the two cameras, and c'_x is the principal point abscissa of the right rectified image.
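A small illustrative numpy sketch of this joint-by-joint back projection follows; the keypoint arrays are assumed to hold the rectified pixel coordinates of the same person's joints in the left and right views, and the function name is hypothetical.

```python
import numpy as np

def joints_to_3d(kp_left, kp_right, Q):
    """kp_left/kp_right: (25, 2) rectified pixel coordinates of one person's joints
    in the left/right views; Q: 4x4 back projection matrix from stereo rectification."""
    pts3d = np.zeros((kp_left.shape[0], 3))
    for i, ((xl, yl), (xr, _)) in enumerate(zip(kp_left, kp_right)):
        d = xl - xr                                   # disparity of the homologous point
        X, Y, Z, W = Q @ np.array([xl, yl, d, 1.0])   # homogeneous 3D coordinates
        pts3d[i] = np.array([X, Y, Z]) / W            # real position (X/W, Y/W, Z/W)
    return pts3d
```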
The human body joint point information obtained for each frame of the image video sequence is output to the dance evaluation module for the subsequent evaluation work.
For easy understanding, please refer to Fig. 2; an embodiment of a three-dimensional human body motion reconstruction method is further provided in the present application, including:
Step S1: performing camera calibration on the binocular camera based on the collected calibration point data to obtain calibration camera parameters, wherein the calibration camera parameters comprise an internal parameter matrix, an external parameter matrix and three-dimensional correction parameters.
Step S2: performing stereo correction on the binocular camera based on the Bouguet algorithm and the calibration camera parameters, so that the binocular camera can obtain parallel coplanar images of the left and right visual angles of the binocular camera at the same moment.
Step S3: acquiring a motion image sequence of the target human body through the binocular camera.
Step S4: identifying key limb joint parts of the human body in the motion image sequence, and extracting the two-dimensional joint points belonging to the same target human body in each frame of the motion image sequence so as to reconstruct the human body skeleton of the target human body and obtain the human body skeleton information of each frame.
Step S5: restoring the real position of the target human body in three-dimensional space by combining binocular stereo vision based on the human body skeleton information in each frame, and reconstructing the three-dimensional motion of the target human body.
For easy understanding, please refer to Fig. 3; an embodiment of a motion training system is provided in the present application, which includes the camera calibration module 101, the camera stereo correction module 102, the motion acquisition module 103, the two-dimensional human motion recognition module 104 and the three-dimensional motion reconstruction module 105 of the foregoing embodiments, and further includes a motion evaluation module 106;
The action evaluation module 106 is used for performing difference comparison between the three-dimensional action of the target human body acquired from the three-dimensional motion reconstruction module and a preset standard action, and outputting an action evaluation result corresponding to the difference comparison result.
Specifically, the action evaluation module 106 is configured to:
Performing joint point combination angle similarity judgment, average curvature comparison of joint point combination motion tracks and human body joint point combination motion quantity comparison on the three-dimensional motion of the target human body acquired from the three-dimensional motion reconstruction module and a preset standard motion;
Acquiring a first action evaluation result corresponding to the result of joint point combination angle similarity judgment, a second action evaluation result corresponding to the result of average curvature comparison of joint point combination motion tracks and a third action evaluation result corresponding to the result of human body joint point combination motion quantity comparison;
And performing weighting processing on the first action evaluation result, the second action evaluation result and the third action evaluation result, and outputting the action evaluation result obtained after the weighting processing.
After obtaining the three-dimensional dance movement of the human body of the user, the difference evaluation is performed on the movement and the standard movement, and the basis of the difference evaluation is divided into the following parts:
(1) Judgment of joint point combination angle similarity. The evaluation method of the angle similarity analysis uses the Euclidean dot product formula to derive the cosine of the corresponding joint angle, cos θ = (A · B)/(|A| |B|), and then converts the cosine into the angle θ, where A and B are the two vectors formed by three joint points (that is, the three joint points form a group, called a joint point combination), so that the user can see the difference intuitively.
The average angle difference of a joint point combination over a set of consecutive movements can then be expressed as the mean of the per-frame differences between the measured angle and the standard angle.
Compared with the standard motion data, a corresponding joint angle difference of less than 10 degrees is generally considered good, a difference greater than 10 degrees but less than 20 degrees indicates some error, and a difference exceeding 20 degrees is judged as a motion error.
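An illustrative sketch of this angle similarity check is given below; the three-level grading follows the thresholds above, while the function names, joint choices and variable names are assumptions.

```python
import numpy as np

def joint_angle(p_a, p_b, p_c):
    """Angle (degrees) at joint p_b formed by vectors A = p_a - p_b and B = p_c - p_b."""
    A, B = np.asarray(p_a) - np.asarray(p_b), np.asarray(p_c) - np.asarray(p_b)
    cos_theta = np.dot(A, B) / (np.linalg.norm(A) * np.linalg.norm(B))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

def grade_angle_difference(user_angle, standard_angle):
    diff = abs(user_angle - standard_angle)
    if diff < 10:                       # generally considered good
        return "good", diff
    if diff < 20:                       # some error
        return "some error", diff
    return "motion error", diff         # exceeds 20 degrees

# Example with hypothetical 3D joint positions (e.g. shoulder-elbow-wrist).
user = joint_angle([0, 0, 0], [0.3, 0, 0], [0.3, 0.25, 0])
print(grade_angle_difference(user, 95.0))
```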
(2) Comparison of the average curvature of joint point combination motion tracks. Human movement is dynamic, and the overall degree of joint movement affects the aesthetics of the dance. The similarity of the motion tracks is judged by calculating the average curvature of the combined motion track of the joint points of each part of the human body and comparing it with the average curvature of the combined motion track of the standard action joint points. A curve is first fitted to the motion track of each joint point; the spatial distance a joint point moves between two frames is denoted Δs and the corresponding rotation angle Δα, so the average curvature of the motion of each joint point can be represented by Δα/Δs averaged over the track, and the average curvature of the joint point combination can be represented as the mean over its joint points.
The similarity of the two corresponding joint point combination motion tracks can then be compared through the magnitudes of these average curvatures.
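A short illustrative numpy sketch of this average-curvature computation follows; it is a discrete approximation under the assumption that the track is sampled frame by frame, and the names are hypothetical.

```python
import numpy as np

def average_curvature(track):
    """track: (n_frames, 3) positions of one joint point over a motion sequence."""
    track = np.asarray(track, dtype=float)
    steps = np.diff(track, axis=0)              # displacement between consecutive frames
    ds = np.linalg.norm(steps, axis=1)          # Δs per step
    curvatures = []
    for v0, v1, s in zip(steps[:-1], steps[1:], ds[1:]):
        if s < 1e-8 or np.linalg.norm(v0) < 1e-8:
            continue
        cos_a = np.dot(v0, v1) / (np.linalg.norm(v0) * np.linalg.norm(v1))
        d_alpha = np.arccos(np.clip(cos_a, -1.0, 1.0))   # Δα: turn angle between steps
        curvatures.append(d_alpha / s)                    # discrete curvature Δα/Δs
    return float(np.mean(curvatures)) if curvatures else 0.0

def combination_average_curvature(tracks):
    """tracks: per-joint (n_frames, 3) arrays belonging to one joint point combination."""
    return float(np.mean([average_curvature(t) for t in tracks]))
```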
(3) Comparison of the human body joint point combination motion amount. The target human body motion (such as a dance motion) is a motion of the whole human body, and in addition to evaluating the condition of single joints it is necessary to judge the swing amplitude of the whole human body. Because each human body is different and there is no unified standard, the distance between the two shoulders of the human body is first used to normalize each node of the human body; the front-and-back movement amounts of the normalized joints are then added to obtain the human body movement amount at a certain moment, which is finally compared with the standard action.
Firstly, because human bodies differ to various degrees, the three-dimensional spatial coordinates of the two shoulders of the human body are used to normalize the coordinates of each joint point of the human body:
x* = (x - x_s1)/(x_s2 - x_s1)
y* = (y - y_s1)/(y_s2 - y_s1)
z* = (z - z_s1)/(z_s2 - z_s1)
where (x_s1, y_s1, z_s1) and (x_s2, y_s2, z_s2) denote the coordinates of the two shoulder joint points.
Secondly, after normalization, the sum of the normalized relative coordinate displacements of a joint point before and after the corresponding calculation time interval is denoted W_p, and the total displacement W = Σ W_p of the joint point combination formed by these joint points is taken as the movement amount of the joint point combination.
The human body movement amount at a certain moment is compared with that of the standard action, and the difference between the two actions is judged through ΔW.
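The following is an illustrative sketch of this movement-amount comparison under the assumptions above (shoulder-based normalization and summed joint displacements); the BODY_25 shoulder indices and function names are assumptions for illustration only.

```python
import numpy as np

R_SHOULDER, L_SHOULDER = 2, 5   # assumed BODY_25 indices of the two shoulders

def normalize_skeleton(joints):
    """joints: (25, 3) 3D joint positions; scale by the inter-shoulder distance."""
    joints = np.asarray(joints, dtype=float)
    shoulder_dist = np.linalg.norm(joints[L_SHOULDER] - joints[R_SHOULDER])
    return (joints - joints[R_SHOULDER]) / max(shoulder_dist, 1e-8)

def movement_amount(frames, joint_ids):
    """Sum of normalized joint displacements over consecutive frames for one combination."""
    normed = np.stack([normalize_skeleton(f) for f in frames])                   # (T, 25, 3)
    per_joint = np.linalg.norm(np.diff(normed[:, joint_ids], axis=0), axis=2)    # W_p terms
    return float(per_joint.sum())                                                # W = Σ W_p

def movement_difference(user_frames, standard_frames, joint_ids):
    return abs(movement_amount(user_frames, joint_ids)
               - movement_amount(standard_frames, joint_ids))                    # ΔW
```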
The resulting values of the three parts are weighted, while the quality level determined for each body part also depends on the difficulty of the exercise. A percentage range is set for each quality grade to obtain the final dance action score, and the three parts of data are visualized on the display screen through the central processing unit and provided to the user.
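Purely as an illustrative sketch (the weights and grade boundaries are assumptions, not values given in the original text), the weighted fusion of the three evaluation results could look like this:

```python
def fuse_scores(angle_score, curvature_score, movement_score,
                weights=(0.5, 0.25, 0.25)):
    """Each score is assumed to be pre-scaled to 0-100; the weights are hypothetical."""
    w1, w2, w3 = weights
    total = w1 * angle_score + w2 * curvature_score + w3 * movement_score
    if total >= 90:
        grade = "excellent"
    elif total >= 75:
        grade = "good"
    elif total >= 60:
        grade = "pass"
    else:
        grade = "needs practice"
    return total, grade

print(fuse_scores(92, 80, 70))  # e.g. (83.5, 'good')
```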
As a further improvement, the action training system in the embodiment of the present application further includes: a music rhythm integrating degree module 107;
The music rhythm integrating degree module 107 is used for extracting the music features of the music being played based on an audio extraction algorithm, judging, in units of beats, the degree to which the human body actions of the target human body in consecutive frames match the series of actions of the preset standard action on the corresponding beat, and outputting a beat matching result.
It should be noted that the music features of the dance music are extracted with an audio extraction algorithm, and, taking the beat as the unit, it is determined whether the actions of the user in consecutive frames are consistent with the series of actions of the standard action on that beat, so as to determine whether the beat is correct. Partial beat scores are recorded along the way, and the system resets the beat alignment to match the actual actions of the user and to provide accurate action data comparison for the action evaluation module 106. The beat scores and the training reports are generated by combining the action evaluation module 106 and the music rhythm integrating degree module 107 and are visually displayed to the user.
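As one possible illustrative realization of the audio extraction step (the original text does not name a specific algorithm or library, so the use of librosa, the file name and the video frame rate are assumptions):

```python
import librosa
import numpy as np

# Load the dance music and estimate its tempo and beat positions.
y, sr = librosa.load("dance_track.mp3", sr=None)
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)   # beat instants in seconds

def frames_for_beat(beat_idx, video_fps=30.0):
    """Video frame indices falling inside one beat interval, for beat-by-beat matching."""
    start = beat_times[beat_idx]
    end = (beat_times[beat_idx + 1] if beat_idx + 1 < len(beat_times)
           else start + 60.0 / float(np.atleast_1d(tempo)[0]))
    return np.arange(int(start * video_fps), int(end * video_fps))

print("estimated tempo (BPM):", tempo, "-", len(beat_times), "beats")
```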
A data storage module 109 can also be provided. In the data storage module 109, the user can choose to store the scoring and training reports and the training recording and comparison video data to a hard disk, or to upload part of them to the cloud platform through the network so that the recommendation algorithm can match the user's personal dance type and training pattern. Meanwhile, the data storage module 109 also stores the camera parameters used for human body three-dimensional motion reconstruction.
As a further improvement, the action training system in the embodiment of the present application further includes: a health daemon module 108;
The health guarding module 108 is used for calculating the physical fitness level, suitable dance types and current training exercise amount of the user through a cloud platform algorithm according to the user information of the target human body, recommending the most suitable action set for the user in a personalized manner according to the calculation result while avoiding actions whose training intensity is too low or too high for the user, recording the training duration and training intensity of each training session of the user, and prompting the user to rest by voice broadcast and/or a display pop-up window.
It should be noted that a mode switching module 110 may also be provided. This module provides mode selection for dance training: three modes can be selected, namely a learning mode, a scoring mode and a special training mode. The learning mode and the scoring mode can be used by a plurality of persons at the same time, while the special training mode defaults to single-person mode.
1) In the learning mode, standard dance videos recorded in advance by professional dancers can be downloaded from the cloud platform for imitation learning. The standard videos provide a front version and a mirror-inverted version for the user to select and display on the screen, and the system can divide the learning into steps and mark difficult and important actions for the user according to differences in the standard actions or the music rhythm, so as to reinforce the learning.
2) The scoring mode calls the action evaluation module 106, which evaluates the fitness of the user action and the music beat and visually displays the fitness to the user in a text special effect mode.
3) In the special training mode, the cloud platform judges the action defect of the user according to the series course learning grade storage data of the user, and provides corresponding local special training according to the defect.
The action training system provided in the embodiment of the application is based on the binocular stereo vision principle. The binocular camera needs no additional expensive sensors and reaches high precision after camera calibration alone, essentially approaching the recognition accuracy of a Kinect depth camera, at a lower implementation cost. Camera shake and complex application scenes are also of less concern: the user can dynamically change the position of the cameras as needed to obtain a better observation angle of the human body, and binocular imaging based on parallax triangulation maintains its precision even in a cluttered room, so the adaptability is stronger. Meanwhile, the dancer does not need to wear any sensing equipment or special clothing for collecting human body actions, so dance postures can be stretched more conveniently and the burden is reduced. The method can also be extended to the recognition of multi-person dance actions.
The above embodiments are only used for illustrating the technical solutions of the present application and not for limiting them. Although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (10)

1. A three-dimensional human motion reconstruction system, comprising:
The camera calibration module is used for calibrating the binocular camera according to the collected calibration point data;
The camera stereo correction module is used for carrying out stereo correction on the calibrated camera parameters after the binocular camera calibration based on a Bouguet algorithm so that the binocular camera can obtain parallel coplanar images of left and right visual angles of the binocular camera at the same moment;
the binocular camera is used for acquiring a target human body action image sequence and sending the action image sequence to the two-dimensional human body action recognition module;
the two-dimensional human body motion recognition module is used for recognizing key human body joint parts in the motion image sequence, extracting two-dimensional joint points which belong to the same target human body in each frame of the motion image sequence so as to reconstruct a human body skeleton of the target human body, and storing and sending the human body skeleton information of each frame to the three-dimensional motion reconstruction module;
And the three-dimensional motion reconstruction module is used for restoring the real position of the target human body in a three-dimensional space by combining binocular stereoscopic vision based on the human body skeleton information in each frame, and reconstructing the three-dimensional motion of the target human body.
2. The three-dimensional human motion reconstruction system of claim 1, wherein the camera calibration module is specifically configured to:
And carrying out calibration pretreatment on the binocular camera according to the collected calibration point data, and calibrating the binocular camera based on the obtained internal parameter matrix, external parameter matrix and three-dimensional correction parameters of the binocular camera through the pretreatment.
3. The three-dimensional human motion reconstruction system of claim 2, wherein the camera stereo correction module is specifically configured to:
And performing stereo correction on the internal parameter matrix and the external parameter matrix based on a Bouguet algorithm according to the reference image sequence shot by the binocular camera, so that reference images of the reference image sequence shot by the binocular camera after the stereo correction are parallel and coplanar at the same moment.
4. The three-dimensional human motion reconstruction system of claim 1, wherein the two-dimensional human motion recognition module is specifically configured to:
and identifying key human body joint parts in the action image sequence based on a preset convolutional neural network, extracting 25 joint points belonging to the same target human body in each frame of the action image sequence so as to reconstruct the human body skeleton of the target human body, and storing and sending the human body skeleton information of each frame to a three-dimensional action reconstruction module.
5. The three-dimensional human motion reconstruction system of claim 1, wherein the three-dimensional motion reconstruction module is specifically configured to:
Restoring the human body skeleton information in each frame to the real position of the target human body in the three-dimensional space based on a preset back projection matrix, and reconstructing the three-dimensional action of the target human body.
6. A three-dimensional human body motion reconstruction method, characterized by comprising the following steps:
Performing camera calibration on a binocular camera based on collected calibration point data to obtain calibration camera parameters, wherein the calibration camera parameters comprise an internal parameter matrix, an external parameter matrix and a three-dimensional correction parameter;
Performing stereo correction on the binocular camera based on a Bouguet algorithm and the calibrated camera parameters, so that the binocular camera can obtain parallel coplanar images of left and right visual angles of the binocular camera at the same moment;
Acquiring a motion image sequence of a target human body through the binocular camera;
Identifying key body joint parts of the human body in the action image sequence, and extracting two-dimensional joint points which belong to the same target human body in each frame of the action image sequence so as to reconstruct the human body skeleton of the target human body and obtain the human body skeleton information of each frame;
and restoring the real position of the target human body in the three-dimensional space by combining binocular stereo vision based on the human body skeleton information in each frame, and reconstructing the three-dimensional action of the target human body.
7. A motion training system comprising the three-dimensional human motion reconstruction system of any one of claims 1-5, further comprising a motion assessment module;
and the action evaluation module is used for comparing the three-dimensional action of the target human body acquired from the three-dimensional action reconstruction module with a preset standard action in a difference mode and outputting an action evaluation result corresponding to the difference comparison result.
8. The action training system of claim 7, wherein the action assessment module is specifically configured to:
Performing joint point combination angle similarity judgment, average curvature comparison of joint point combination motion tracks and human body joint point combination motion quantity comparison on the three-dimensional motion of the target human body acquired from the three-dimensional motion reconstruction module and a preset standard motion;
Acquiring a first action evaluation result corresponding to the result of the joint point combination angle similarity judgment, a second action evaluation result corresponding to the result of the average curvature comparison of the joint point combination motion trail and a third action evaluation result corresponding to the result of the human body joint point combination motion quantity comparison;
And performing weighting processing on the first action evaluation result, the second action evaluation result and the third action evaluation result, and outputting action evaluation results obtained after weighting processing.
9. The action training system of claim 8, further comprising: a music rhythm integrating degree module;
the music rhythm integrating module is used for extracting the music characteristics of the music on broadcasting based on an audio extraction algorithm, judging the matching degree of the human body actions of the target human body in continuous frames and the preset standard actions in the series of actions matching the beat by taking the beat as a unit, and outputting a beat matching result.
10. The action training system of claim 8, further comprising: a health guarding module;
the health guarding module is used for calculating the physical quality degree, the proper dancing type and the current training exercise amount of the user through a cloud platform algorithm according to the user information of the target human body, recommending the most proper action set of the user according to the calculation result in a personalized high-quality mode, avoiding the action with too weak or too high training intensity of the user, recording the training time and the training intensity of the user every time, and prompting the user to have a rest through a voice broadcasting and/or display popup window mode.
CN201910843462.2A 2019-09-06 2019-09-06 Three-dimensional human body action reconstruction system, method and action training system Pending CN110544301A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910843462.2A CN110544301A (en) 2019-09-06 2019-09-06 Three-dimensional human body action reconstruction system, method and action training system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910843462.2A CN110544301A (en) 2019-09-06 2019-09-06 Three-dimensional human body action reconstruction system, method and action training system

Publications (1)

Publication Number Publication Date
CN110544301A true CN110544301A (en) 2019-12-06

Family

ID=68712744

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910843462.2A Pending CN110544301A (en) 2019-09-06 2019-09-06 Three-dimensional human body action reconstruction system, method and action training system

Country Status (1)

Country Link
CN (1) CN110544301A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106910222A (en) * 2017-02-15 2017-06-30 中国科学院半导体研究所 Face three-dimensional rebuilding method based on binocular stereo vision
CN108021889A (en) * 2017-12-05 2018-05-11 重庆邮电大学 A kind of binary channels infrared behavior recognition methods based on posture shape and movable information
CN108053469A (en) * 2017-12-26 2018-05-18 清华大学 Complicated dynamic scene human body three-dimensional method for reconstructing and device under various visual angles camera
CN108597578A (en) * 2018-04-27 2018-09-28 广东省智能制造研究所 A kind of human motion appraisal procedure based on two-dimensional framework sequence
CN108985223A (en) * 2018-07-12 2018-12-11 天津艾思科尔科技有限公司 A kind of human motion recognition method
CN110020633A (en) * 2019-04-12 2019-07-16 腾讯科技(深圳)有限公司 Training method, image-recognizing method and the device of gesture recognition model
CN110059661A (en) * 2019-04-26 2019-07-26 腾讯科技(深圳)有限公司 Action identification method, man-machine interaction method, device and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHEN, XUEMEI: "Action evaluation system based on three-dimensional human body posture", China Master's Theses Full-text Database, Information Science and Technology *
CHEN, XUEMEI: "Action evaluation system based on three-dimensional human body posture", China Master's Theses Full-text Database, Information Science and Technology, 15 August 2018 (2018-08-15), pages 1 - 6 *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111105486A (en) * 2019-12-07 2020-05-05 东南大学 Multi-view-angle dynamic three-dimensional reconstruction method for mice
CN111310859A (en) * 2020-03-26 2020-06-19 上海景和国际展览有限公司 Rapid artificial intelligence data training system used in multimedia display
CN111444890A (en) * 2020-04-30 2020-07-24 汕头市同行网络科技有限公司 Sports data analysis system and method based on machine learning
CN111798486A (en) * 2020-06-16 2020-10-20 浙江大学 Multi-view human motion capture method based on human motion prediction
CN111798486B (en) * 2020-06-16 2022-05-17 浙江大学 Multi-view human motion capture method based on human motion prediction
CN111754620B (en) * 2020-06-29 2024-04-26 武汉市东旅科技有限公司 Human body space motion conversion method, conversion device, electronic equipment and storage medium
CN111754620A (en) * 2020-06-29 2020-10-09 武汉市东旅科技有限公司 Human body space motion conversion method, conversion device, electronic equipment and storage medium
CN111857482A (en) * 2020-07-24 2020-10-30 北京字节跳动网络技术有限公司 Interaction method, device, equipment and readable medium
CN111881886A (en) * 2020-08-21 2020-11-03 董秀园 Intelligent seat control method and device based on posture recognition
CN111881887A (en) * 2020-08-21 2020-11-03 董秀园 Multi-camera-based motion attitude monitoring and guiding method and device
CN111881888A (en) * 2020-08-21 2020-11-03 董秀园 Intelligent table control method and device based on attitude identification
CN112200041A (en) * 2020-09-29 2021-01-08 Oppo(重庆)智能科技有限公司 Video motion recognition method and device, storage medium and electronic equipment
CN112200041B (en) * 2020-09-29 2022-08-02 Oppo(重庆)智能科技有限公司 Video motion recognition method and device, storage medium and electronic equipment
JP2023512282A (en) * 2020-11-12 2023-03-24 深▲セン▼市洲明科技股▲フン▼有限公司 MULTI-PERSON 3D MOTION CAPTURE METHOD, STORAGE MEDIUM AND ELECTRONIC DEVICE
JP7480312B2 (en) 2020-11-12 2024-05-09 深▲セン▼市洲明科技股▲フン▼有限公司 Multi-person three-dimensional motion capture method, storage medium, and electronic device
CN112329723A (en) * 2020-11-27 2021-02-05 北京邮电大学 Binocular camera-based multi-person human body 3D skeleton key point positioning method
CN112435731A (en) * 2020-12-16 2021-03-02 成都翡铭科技有限公司 Method for judging whether real-time posture meets preset rules
CN112435731B (en) * 2020-12-16 2024-03-19 成都翡铭科技有限公司 Method for judging whether real-time gesture meets preset rules
CN112270815A (en) * 2020-12-23 2021-01-26 四川写正智能科技有限公司 Read-write gesture recognition method and system based on smart watch
CN112926475A (en) * 2021-03-08 2021-06-08 电子科技大学 Human body three-dimensional key point extraction method
CN112926475B (en) * 2021-03-08 2022-10-21 电子科技大学 Human body three-dimensional key point extraction method
CN113850865A (en) * 2021-09-26 2021-12-28 北京欧比邻科技有限公司 Human body posture positioning method and system based on binocular vision and storage medium
WO2023082306A1 (en) * 2021-11-12 2023-05-19 苏州瑞派宁科技有限公司 Image processing method and apparatus, and electronic device and computer-readable storage medium
CN115880774A (en) * 2022-12-01 2023-03-31 湖南工商大学 Body-building action recognition method and device based on human body posture estimation and related equipment

Similar Documents

Publication Publication Date Title
CN110544301A (en) Three-dimensional human body action reconstruction system, method and action training system
Rogez et al. Mocap-guided data augmentation for 3d pose estimation in the wild
CN110544302A (en) Human body action reconstruction system and method based on multi-view vision and action training system
KR101323966B1 (en) A system and method for 3D space-dimension based image processing
CN108717531B (en) Human body posture estimation method based on Faster R-CNN
US9330470B2 (en) Method and system for modeling subjects from a depth map
CN111881887A (en) Multi-camera-based motion attitude monitoring and guiding method and device
CN110448870B (en) Human body posture training method
CN113706699B (en) Data processing method and device, electronic equipment and computer readable storage medium
JP5795250B2 (en) Subject posture estimation device and video drawing device
CN105426827A (en) Living body verification method, device and system
WO2021238171A1 (en) Image registration method and related model training method, device and apparatus
JP2019096113A (en) Processing device, method and program relating to keypoint data
CN110717391A (en) Height measuring method, system, device and medium based on video image
CN110751100A (en) Auxiliary training method and system for stadium
CN110298279A (en) A kind of limb rehabilitation training householder method and system, medium, equipment
CN115035546B (en) Three-dimensional human body posture detection method and device and electronic equipment
CN111435550A (en) Image processing method and apparatus, image device, and storage medium
JP5503510B2 (en) Posture estimation apparatus and posture estimation program
Zou et al. Automatic reconstruction of 3D human motion pose from uncalibrated monocular video sequences based on markerless human motion tracking
CN114373044A (en) Method, device, computing equipment and storage medium for generating three-dimensional face model
JP6799468B2 (en) Image processing equipment, image processing methods and computer programs
CN117711066A (en) Three-dimensional human body posture estimation method, device, equipment and medium
CN111783497A (en) Method, device and computer-readable storage medium for determining characteristics of target in video
CN113947811A (en) Taijiquan action correction method and system based on generation of confrontation network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20191206