CN113158459A - Human body posture estimation method based on visual and inertial information fusion - Google Patents
Human body posture estimation method based on visual and inertial information fusion
- Publication number: CN113158459A (application number CN202110422431.7A)
- Authority: CN (China)
- Prior art keywords: coordinate system, human body, inertial, visual, skeleton
- Prior art date: 2021-04-20
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06F30/20: Computer-aided design [CAD], design optimisation, verification or simulation
- G06F18/23213: Pattern recognition, non-hierarchical clustering using statistics or function optimisation with a fixed number of clusters, e.g. k-means clustering
- G06V40/20: Recognition of biometric, human-related or animal-related patterns in image or video data, movements or behaviour, e.g. gesture recognition
- G06F2111/04: Details relating to CAD techniques, constraint-based CAD
Abstract
A human body posture estimation method based on visual and inertial information fusion addresses the shortcoming that posture estimation methods based on a 3D visual sensor cannot provide three-degree-of-freedom rotation information. Exploiting the complementarity of visual and inertial information, the method adaptively fuses visual information, inertial information and human body posture prior information by nonlinear optimization, obtaining the rotation angle of each human skeleton node and the global position of the root skeleton node at every moment, and thus completing real-time human body posture estimation. The invention effectively improves the accuracy and robustness of human body posture estimation, and compensates for the visual sensor's susceptibility to occlusion and the accumulation of inertial measurement error over time.
Description
Technical Field
The invention belongs to the field of human body posture estimation, and particularly relates to a human body posture estimation method based on visual and inertial information fusion.
Background
Human body posture estimation has important application value. With the development of visual sensors, inertial measurement units, artificial intelligence and related technologies, it is gradually being applied in fields such as human-robot collaboration, video surveillance, film and television production, and industrial and agricultural production: for example, to safeguard workers during human-robot collaboration, or to record and analyze the behavior of people in surveillance footage.
3D human body posture estimation is a relatively mature technology. However, with the development of behavior recognition, assisted training, human-robot collaboration and related fields, applications increasingly require 6D posture information. In dance-assisted training, for example, 6D posture estimation includes joint rotation information, so the captured details of dance movements are richer and trainees achieve better training results. In daily production and life, posture estimation based on 3D vision is the most common and practical approach: it accurately extracts human skeleton joint points to obtain 3D posture information, but data reliability degrades when the body occludes itself or the camera is partially occluded. An inertial measurement unit provides spatial rotation information with stable output, but the error of the rotation information accumulates over time. The complementarity of visual and inertial information enables 6D posture estimation of the human body; simply combining the three-degree-of-freedom displacement output by vision with the three-degree-of-freedom rotation output by inertial sensors, however, yields a posture estimation system with poor robustness and low precision. At present, no technique solves the 6D human posture estimation problem by combining visual and inertial information robustly and in real time.
Disclosure of Invention
In order to overcome the defect that human body posture estimation methods based on a 3D vision sensor cannot provide three-degree-of-freedom rotation information, the invention provides a human body posture estimation method based on visual and inertial information fusion.
The technical scheme adopted by the invention comprises the following steps:
a human body posture estimation method based on visual and inertial information fusion comprises the following steps:
step 1) establishing a kinematic model of each skeletal node of the human body, determining the optimization variable θ, and determining the homogeneous transformation matrix $T_c^g$ between the camera coordinate system c and the global coordinate system g, the rotation matrix $R_n^g$ between the inertial coordinate system n and the global coordinate system g, and the displacement $t_i^{b_i}$ and rotation matrix $R_i^{b_i}$ between each inertial sensor i and its corresponding bone coordinate system $b_i$;
Step 2) setting the output frequency of vision and inertia to be consistent, and restricting the rotation of the inertial sensor by ER(theta), acceleration constraint EA(theta), visual sensor position constraint EP(theta) and body pose prior constraints Eprior(theta) constructing an optimization problem for the optimization items, and setting the weight of each optimization item;
step 3) at each moment, reading the position measurements $p_j^c$ of the visual sensors and the rotation measurement $R_i$ and acceleration measurement $a_i$ of each inertial sensor, and, after unifying the coordinate systems, calculating the sensor measurement and the estimated value of each optimization term;
Step 4) solving a nonlinear least square optimization problem, wherein the optimal solution theta at each moment is the optimal rotation angle of each skeleton node of the human body at the current momentDegree and root skeletal node n1Obtaining the estimation of the human body posture at the current moment according to the established human body skeleton node kinematics model;
step 5) repeating steps 3) and 4) to complete the state estimation of each human joint point at every moment, obtaining real-time human posture estimation based on visual and inertial information fusion.
Further, in step 1), the camera coordinate system c is the coordinate system of the depth camera, the inertial coordinate system n is the unified coordinate system of all inertial sensors after calibration, and the global coordinate system g is aligned with the initial coordinate system of the bone nodes.
In step 2), the rotation constraint $E_R(\theta)$ is established from the difference between the measured and estimated rotation matrix of each IMU; the acceleration constraint $E_A(\theta)$ is established by minimizing the difference between the measured and estimated acceleration of each IMU; the position constraint $E_P(\theta)$ is established by minimizing the difference between the measured and estimated global position of each bone node; and the human posture prior constraint $E_{prior}(\theta)$ is established from an existing human posture estimation dataset.
In step 3), $p_j^c$ is the position information of each human joint point read from a visual sensor, $R_i$ is the rotation information read from an inertial gyroscope, and $a_i$ is the acceleration information read from an inertial accelerometer.
In step 4), the root skeleton node $n_1$ is located at the pelvic joint point of the human body.
The invention has the following beneficial effects: aiming at the shortcoming that human body posture estimation methods based on a 3D visual sensor lack three-degree-of-freedom rotation output, the method adaptively fuses visual information, inertial information and human posture prior information by nonlinear optimization to obtain 6D human posture estimates, improving the precision and robustness of human posture estimation and compensating for the visual sensor's susceptibility to occlusion and the accumulation of inertial error over time.
Drawings
FIG. 1 is a flow chart of a human body posture estimation method based on visual and inertial information fusion.
Fig. 2 is a schematic view of the upper-body bone nodes and the positions where the IMUs are worn on the human body.
Fig. 3 is a schematic view of the placement of the visual sensors.
FIG. 4 is a flow chart of a human body posture estimation algorithm with fusion of visual and inertial information.
Detailed Description
In order to make the technical scheme and design idea of the invention clearer, the upper half of the human body is selected as the posture estimation object, two visual sensors and five inertial sensors are adopted, and the invention is further described below with reference to the drawings.
Referring to Figs. 1 to 4, a human body posture estimation method based on the fusion of visual and inertial information includes the following steps:
step 1) establishing a kinematic model of each skeletal node of the human body, determining the optimization variable θ, and determining the homogeneous transformation matrix $T_c^g$ between the camera coordinate system c and the global coordinate system g, the rotation matrix $R_n^g$ between the inertial coordinate system n and the global coordinate system g, and the displacement $t_i^{b_i}$ and rotation matrix $R_i^{b_i}$ between each inertial sensor i and its corresponding bone coordinate system $b_i$. The process is as follows:
1.1) The human skeleton is modeled as interconnected rigid bodies, and the initial coordinate system B of the skeleton nodes is aligned with the global coordinate system g. The number of upper-body bones is defined as $n_b = 13$; as shown in Fig. 2, the bone nodes are the left hand, right hand, left forearm, right forearm, left upper arm, right upper arm, left shoulder, right shoulder, spine 1-4 and pelvis, where the pelvis is taken as the root bone node $n_1$. Each child bone node $n_b$ (b ≥ 2) has a rotation matrix $R_b$ relative to its parent node and a constant relative displacement $t_b$; each bone has 3 rotational degrees of freedom, and the root node additionally has a global displacement $(x_1, y_1, z_1)$, so the motion of the whole upper body is expressed by $d = 3 + 3 \times n_b = 42$ degrees of freedom. These 42 variables are recorded as a 42-dimensional vector θ, which serves as the optimization variable of the optimization problem. The homogeneous transformation matrix of each rigid bone in the global coordinate system is derived by the forward kinematics formula

$$T_b^g(\theta) = \prod_{k \in P(b)} \begin{bmatrix} R_k & t_k \\ 0 & 1 \end{bmatrix},$$  (1)

where P(b) is the set of all bones on the chain from the root to bone b;
1.2) As shown in Fig. 3, the two visual sensors are placed in front of the tester at a distance L = 2 m. Using Zhang Zhengyou's camera calibration method, the translation matrix $t_c^g$ and rotation matrix $R_c^g$ of each of the two cameras with respect to the global coordinate system g are obtained, which determine the homogeneous transformation matrix $T_c^g$ between the camera coordinate system c and the global coordinate system g;
1.3) The inertial sensor (IMU) is placed at the global coordinate system g so that the inertial sensor coordinate system n is aligned with g, and the output value of the inertial sensor at this moment is recorded as the rotation matrix $R_n^g$ between the inertial coordinate system n and the global coordinate system g. Repeating this operation yields the rotation matrix $R_{n_i}^g$ between the i-th (i = 1, 2, 3, 4, 5) inertial sensor coordinate system $n_i$ and the global coordinate system g;
1.4) The IMUs are worn at the corresponding skeletal points of the left hand, right hand, left forearm, right forearm and pelvis, as shown in Fig. 2. $IMU_i$ and its corresponding bone coordinate system $b_i$ are taken to have no relative displacement, i.e. $t_i^{b_i} = 0$. At the initial moment the tester holds a "T-pose" calibration posture, at which the measured value of $IMU_i$ is defined as $R_{i\_initial}$; since every bone frame coincides with the global frame in this pose, the rotation matrix $R_i^{b_i}$ between $IMU_i$ and its bone coordinate system $b_i$ is expressed as

$$R_i^{b_i} = R_{n_i}^g\, R_{i\_initial}.$$  (2)
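A sketch of this T-pose calibration and of the runtime bone-orientation measurement of equation (3); the exact frame conventions are a reconstruction, since the patent's own formulas survive only as images:

```python
import numpy as np

def calibrate_imu_to_bone(R_ng: np.ndarray, R_initial: np.ndarray) -> np.ndarray:
    """Equation (2): rotation R_i^{b_i} from IMU i to its bone frame, computed once
    during the T-pose, in which every bone frame is assumed to coincide with g.
    R_ng: rotation of IMU i's inertial frame n_i w.r.t. g (from step 1.3).
    R_initial: the IMU reading held during the T-pose."""
    return R_ng @ R_initial

def bone_rotation_measurement(R_ng: np.ndarray, R_i: np.ndarray, R_ib: np.ndarray) -> np.ndarray:
    """Equation (3): measured bone orientation in g at runtime,
    R_b^g = (R_n^g R_i) (R_i^b)^(-1); the inverse of a rotation is its transpose."""
    return R_ng @ R_i @ R_ib.T
```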
step 2) setting the visual and inertial output frequency to 30 Hz, constructing the optimization problem with the inertial-sensor rotation constraint $E_R(\theta)$, the acceleration constraint $E_A(\theta)$, the visual-sensor position constraint $E_P(\theta)$ and the human posture prior constraint $E_{prior}(\theta)$ as its optimization terms, and setting the weight of each term. The process is as follows:
2.1) The difference between the measured and estimated rotation matrix, in the global coordinate system, of the bone node corresponding to $IMU_i$ serves as the IMU rotation-term constraint. The rotation-matrix measurement of the corresponding bone node is expressed as

$$R_{b_i}^{g,meas} = R_{n_i}^g\, R_i\, \big(R_i^{b_i}\big)^{-1},$$  (3)

where $R_i$ is the measured value of $IMU_i$. The rotation-matrix estimate of the corresponding bone node follows from the kinematic chain,

$$\hat{R}_{b_i}^g(\theta) = \prod_{k \in P(b_i)} R_k,$$  (4)

where $P(b_i)$ is the set of all parents of bone $b_i$.

In summary, the energy function of the rotation term is defined as

$$E_R(\theta) = \lambda_R \sum_{i=1}^{5} \rho_R\Big(\big\|\psi\big(\hat{R}_{b_i}^g(\theta)^{\top} R_{b_i}^{g,meas}\big)\big\|^2\Big),$$  (5)

where ψ(·) extracts the vector part of the quaternion expression of a rotation matrix, $\lambda_R$ is the weight of the rotation-term energy function, and $\rho_R(\cdot)$ denotes a loss function.
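As a sketch, ψ(·) and the stacked rotation residuals might be implemented as follows with SciPy; handing the robust loss $\rho_R$ to the solver rather than applying it here is an assumption:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def psi(R_rel: np.ndarray) -> np.ndarray:
    """psi(.) of equation (5): vector (imaginary) part of the quaternion of a
    relative rotation; it approaches zero as measurement and estimate agree."""
    return Rotation.from_matrix(R_rel).as_quat()[:3]  # SciPy quaternion order: (x, y, z, w)

def rotation_residuals(R_meas: list, R_est: list, lam_R: float = 0.1) -> np.ndarray:
    """Stacked, weighted rotation residuals over the five IMUs."""
    return np.concatenate(
        [np.sqrt(lam_R) * psi(R_est[i].T @ R_meas[i]) for i in range(len(R_meas))])
```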
2.2) Minimizing the error between the acceleration measurement $a_i$ of $IMU_i$ and its estimate serves as the IMU acceleration constraint term. The acceleration estimate is obtained by twice differentiating the bone position given by forward kinematics,

$$\hat{a}_{b_i}(t-1) = \frac{\hat{p}_{b_i}(t) - 2\,\hat{p}_{b_i}(t-1) + \hat{p}_{b_i}(t-2)}{\Delta t^2},$$  (6)

where $\hat{p}_{b_i}(t)$ is the estimated global position of bone $b_i$ at time t and Δt is the sampling interval. The (t-1) on the left side of equation (6) indicates that the acceleration constraint of the previous moment is used at the current moment. The acceleration measurement $a_i^g$ in the global coordinate system is calculated from the rotation information and the acceleration measurement of the previous frame:

$$a_i^g = R_{n_i}^g\, R_i\, a_i - a_g,$$  (7)

where $a_g$ is the acceleration of gravity.

In summary, the energy function of the acceleration term is defined as

$$E_A(\theta) = \lambda_A \sum_{i=1}^{5} \rho_A\Big(\big\|\hat{a}_{b_i}(t-1) - a_i^g(t-1)\big\|^2\Big),$$  (8)

where $\lambda_A$ is the weight of the acceleration-term energy function and $\rho_A(\cdot)$ denotes a loss function.
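A sketch of the two acceleration quantities under the reconstruction above; the finite-difference form of equation (6) and the gravity vector value are assumptions drawn from the surrounding text:

```python
import numpy as np

def accel_estimate(p_t: np.ndarray, p_t1: np.ndarray, p_t2: np.ndarray,
                   dt: float = 1 / 30) -> np.ndarray:
    """Equation (6): second finite difference of the estimated bone position,
    giving the acceleration estimate at time t-1 (dt = 1/30 s at 30 Hz)."""
    return (p_t - 2 * p_t1 + p_t2) / dt ** 2

def accel_measurement_global(R_ng: np.ndarray, R_i: np.ndarray, a_i: np.ndarray,
                             a_g: np.ndarray = np.array([0.0, 0.0, 9.81])) -> np.ndarray:
    """Equation (7): IMU acceleration rotated into g with gravity a_g removed."""
    return R_ng @ R_i @ a_i - a_g
```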
2.3) The global coordinates (x, y, z) of the human skeleton nodes are obtained from the depth cameras of the visual sensors, and a constraint term minimizing the difference between the measured and estimated global positions of the skeleton nodes is added. The number of skeleton nodes used in the position constraint term is defined as $n_p$, the position of skeleton node j in camera coordinate system c is $p_j^c$, and the number of cameras is $n_c$. The estimate of a bone-node position is the translation part of its forward-kinematics transform,

$$\hat{p}_{b_j}(\theta) = trans\big(T_{b_j}^g(\theta)\big).$$  (9)

In summary, the energy function of the position constraint is defined as

$$E_P(\theta) = \lambda_P \sum_{c=1}^{n_c} \sum_{j=1}^{n_p} \rho_P\Big(\big\|\hat{p}_{b_j}(\theta) - T_c^g\, p_j^c\big\|^2\Big),$$  (10)

where $\lambda_P$ is the weight of the position-constraint energy function and $\rho_P(\cdot)$ denotes a loss function.
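A sketch of the position residuals of equation (10), transforming each camera's measurements into the global frame via homogeneous coordinates (the residual stacking is an assumption):

```python
import numpy as np

def position_residuals(p_cam: np.ndarray, T_cg: np.ndarray, p_est: np.ndarray,
                       lam_P: float = 10.0) -> np.ndarray:
    """Residuals of equation (10).
    p_cam: (n_c, n_p, 3) joint positions measured in each camera frame c.
    T_cg:  (n_c, 4, 4) homogeneous transforms T_c^g from step 1.2.
    p_est: (n_p, 3) estimated joint positions from forward kinematics."""
    res = []
    for c in range(p_cam.shape[0]):
        p_h = np.hstack([p_cam[c], np.ones((p_cam.shape[1], 1))])  # homogeneous coords
        p_g = (T_cg[c] @ p_h.T).T[:, :3]                           # measurements in g
        res.append(np.sqrt(lam_P) * (p_est - p_g).ravel())
    return np.concatenate(res)
```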
2.4) The freedom of motion of the actual skeleton is limited, so the posture prior term $E_{prior}(\theta)$ is used to restrain unreasonable joint motion. $E_{prior}(\theta)$ is established from the existing human posture estimation dataset "TotalCapture (2017)", which contains 126000 frames of human motion posture data.

First, k-means clustering is performed on all data in the dataset, with the number of clusters chosen as k = 126000/100 = 1260. Then the mean of all cluster centers is taken to obtain the posture mean μ. Finally, statistical analysis of the original data yields the posture standard deviation σ and the upper and lower limits $\theta_{max}$ and $\theta_{min}$ of each bone-node degree of freedom. The posture prior term is thus defined as

$$E_{prior}(\theta) = \lambda_{prior}\, \rho_{prior}\Big(\Big\|\frac{\tilde{\theta} - \mu}{\sigma}\Big\|^2\Big), \qquad \theta_{min} \le \tilde{\theta} \le \theta_{max},$$  (11)

where $\tilde{\theta}$ has dimension 36 and does not include the displacement and rotation of the root node, which are left unconstrained; $\lambda_{prior}$ is the weight of the prior-term energy function and $\rho_{prior}(\cdot)$ denotes a loss function.
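A sketch of building this prior with scikit-learn's k-means; the loader load_totalcapture_poses and the θ layout (first six entries are the root pose) are hypothetical, and the normalized-deviation form of equation (11) is a reconstruction:

```python
import numpy as np
from sklearn.cluster import KMeans

poses = load_totalcapture_poses()   # hypothetical loader: (126000, 36) non-root DOFs
k = poses.shape[0] // 100           # 126000 / 100 = 1260 clusters
centers = KMeans(n_clusters=k, n_init=10).fit(poses).cluster_centers_
mu = centers.mean(axis=0)           # posture mean from the cluster centers
sigma = poses.std(axis=0)           # per-DOF standard deviation
theta_min, theta_max = poses.min(axis=0), poses.max(axis=0)  # DOF limits

def prior_residuals(theta: np.ndarray, lam_prior: float = 1e-4) -> np.ndarray:
    """Residuals of equation (11): weighted, normalized deviation of the 36
    non-root DOFs from the prior mean (theta[:6], the root pose, is unconstrained)."""
    return np.sqrt(lam_prior) * (theta[6:] - mu) / sigma
```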
2.5) In summary, the optimization problem is constructed as

$$\theta^* = \arg\min_{\theta}\; E_R(\theta) + E_A(\theta) + E_P(\theta) + E_{prior}(\theta).$$  (12)

The loss function in $E_A$, $E_P$ and $E_{prior}$ is set to ρ(x) = log(1 + x), which limits the influence of abnormal values by penalizing large residuals only sub-linearly. The weights of the optimization terms are set to $\lambda_R = 0.1$, $\lambda_P = 10$, $\lambda_A = 0.005$ and $\lambda_{prior} = 0.0001$.
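Putting the terms together, a per-frame solve might look as follows with SciPy's robust least-squares; loss="cauchy" is SciPy's ρ(x) = ln(1 + x), and applying it to all terms (the patent exempts $E_R$) as well as the helper accel_residuals and the measurement dictionary are assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def frame_residuals(theta: np.ndarray, meas: dict) -> np.ndarray:
    """Stacked residuals of the four terms of equation (12) for one frame."""
    T = bone_transforms(theta)                          # step 1.1 sketch
    R_est = [T[b, :3, :3] for b in meas["imu_bones"]]   # hypothetical bone indices
    r_R = rotation_residuals(meas["R_meas"], R_est)
    r_A = accel_residuals(theta, meas)                  # hypothetical wrapper around eqs. (6)-(8)
    r_P = position_residuals(meas["p_cam"], meas["T_cg"], T[:, :3, 3])
    return np.concatenate([r_R, r_A, r_P, prior_residuals(theta)])

def solve_frame(theta_prev: np.ndarray, meas: dict,
                lo: np.ndarray, hi: np.ndarray) -> np.ndarray:
    """Warm-start from the previous frame; the box bounds encode
    theta_min/theta_max from the prior of step 2.4."""
    return least_squares(frame_residuals, theta_prev, args=(meas,),
                         loss="cauchy", bounds=(lo, hi), method="trf").x
```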
Step 3) reading the position measurement values of the vision sensor at each momentAnd rotation measurement value R of inertial sensoriAnd acceleration measurement aiCalculating the sensor measurement value of each optimized item after the unified coordinate systemAnd the estimated value
Step 4) solving a nonlinear least square optimization problem, wherein the optimal solution theta at each moment is the optimal rotation angle of each skeleton node of the human body at the current moment and a root skeleton node n1Obtaining the estimation of the human body posture at the current moment according to the established human body skeleton node kinematics model;
as shown in fig. 1, the steps 3) and 4) are repeatedly executed to finish the optimal estimation of the position and the rotation of the human joint point at each moment, so as to obtain the real-time human posture estimation based on the fusion of visual and inertial information.
The embodiments described in this specification are merely illustrative of the inventive concept. The scope of the present invention should not be construed as limited to the particular forms set forth in the embodiments; it also covers equivalent technical means that those skilled in the art can conceive on the basis of the inventive concept.
Claims (7)
1. A human body posture estimation method based on visual and inertial information fusion is characterized by comprising the following steps:
step 1) establishing a kinematic model of each skeletal node of the human body, determining the optimization variable θ, and determining the homogeneous transformation matrix $T_c^g$ between the camera coordinate system c and the global coordinate system g, the rotation matrix $R_n^g$ between the inertial coordinate system n and the global coordinate system g, and the displacement $t_i^{b_i}$ and rotation matrix $R_i^{b_i}$ between each inertial sensor i and its corresponding bone coordinate system $b_i$;
step 2) setting the visual and inertial output frequencies to be consistent, constructing an optimization problem whose terms are the inertial-sensor rotation constraint $E_R(\theta)$, the acceleration constraint $E_A(\theta)$, the visual-sensor position constraint $E_P(\theta)$ and the human posture prior constraint $E_{prior}(\theta)$, and setting the weight of each optimization term;
step 3) at each moment, reading the position measurements $p_j^c$ of the visual sensors and the rotation measurement $R_i$ and acceleration measurement $a_i$ of each inertial sensor, and, after unifying the coordinate systems, calculating the sensor measurement and the estimated value of each optimization term;
step 4) solving the nonlinear least-squares optimization problem to obtain the optimal solution θ at each moment, namely the optimal rotation angle of each human skeleton node and the global position of the root skeleton node $n_1$ at the current moment, and obtaining the estimate of the human posture at the current moment from the established skeleton-node kinematic model;
and step 5) repeating steps 3) and 4) to complete the state estimation of each human joint point at every moment, so as to obtain real-time human posture estimation based on the fusion of visual and inertial information.
2. The human body posture estimation method based on the fusion of visual and inertial information as claimed in claim 1, wherein in the step 1), the camera coordinate system c represents the coordinate system of the depth camera, the inertial coordinate system n represents the calibrated unified coordinate system of all the inertial sensors, and the global coordinate system g is aligned with the initial coordinate system of the bone node.
3. The human body posture estimation method based on the fusion of visual and inertial information according to claim 1 or 2, wherein in step 2), the rotation constraint $E_R(\theta)$ is established from the difference between the measured and estimated rotation matrix of each IMU; the acceleration constraint $E_A(\theta)$ is established by minimizing the difference between the measured and estimated acceleration of each IMU; the position constraint $E_P(\theta)$ is established by minimizing the difference between the measured and estimated global position of each bone node; and the human posture prior constraint $E_{prior}(\theta)$ is established from an existing human posture estimation dataset.
4. The human body posture estimation method based on the fusion of visual and inertial information as claimed in claim 1 or 2, wherein in step 3), $p_j^c$ is the position information of each human joint point read from a visual sensor, $R_i$ is the rotation information read from an inertial gyroscope, and $a_i$ is the acceleration information read from an inertial accelerometer.
5. The human body posture estimation method based on visual and inertial information fusion of claim 1 or 2, wherein in step 4), the root skeleton node $n_1$ is located at the pelvic joint point of the human body.
6. The human body posture estimation method based on the fusion of the visual information and the inertial information as claimed in claim 1 or 2, characterized in that the process of the step 1) is:
1.1) the human skeleton is modeled as interconnected rigid bodies, the initial coordinate system B of the skeleton nodes is aligned with the global coordinate system g, and the number of upper-body bones is defined as $n_b = 13$; the bone nodes are respectively the left hand, right hand, left forearm, right forearm, left upper arm, right upper arm, left shoulder, right shoulder, spine 1-4 and pelvis, wherein the pelvis is taken as the root bone node $n_1$; each child bone node $n_b$ (b ≥ 2) has a rotation matrix $R_b$ relative to its parent node and a constant relative displacement $t_b$, each bone has 3 rotational degrees of freedom, and the root node has a global displacement $(x_1, y_1, z_1)$, so that the motion of the whole upper body is expressed by $d = 3 + 3 \times n_b = 42$ degrees of freedom; the 42 variables are recorded as a 42-dimensional vector θ, which is used as the optimization variable of the optimization problem, and the homogeneous transformation matrix of each rigid bone in the global coordinate system is derived by the forward kinematics formula

$$T_b^g(\theta) = \prod_{k \in P(b)} \begin{bmatrix} R_k & t_k \\ 0 & 1 \end{bmatrix},$$  (1)

wherein P(b) is the set of all bones on the chain from the root to bone b;
1.2) placing the two visual sensors in front of the tester at a distance L = 2 m, obtaining the translation matrix $t_c^g$ and rotation matrix $R_c^g$ of each of the two cameras with respect to the global coordinate system g by Zhang Zhengyou's camera calibration method, and thereby determining the homogeneous transformation matrix $T_c^g$ between the camera coordinate system c and the global coordinate system g;
1.3) placing the inertial sensor IMU at the global coordinate system g so that the inertial sensor coordinate system n is aligned with g, and recording the output value of the inertial sensor as the rotation matrix $R_n^g$ between the inertial coordinate system n and the global coordinate system g; repeating this operation yields the rotation matrix $R_{n_i}^g$ between the i-th (i = 1, 2, 3, 4, 5) inertial sensor coordinate system $n_i$ and the global coordinate system g;
1.4) the IMUs are worn on the corresponding skeletal points of the left hand, right hand, left forearm, right forearm and pelvis, $IMU_i$ and its corresponding bone coordinate system $b_i$ having no relative displacement, i.e. $t_i^{b_i} = 0$;
at the initial moment the tester holds a "T-pose" calibration posture, at which the measured value of $IMU_i$ is defined as $R_{i\_initial}$; the rotation matrix $R_i^{b_i}$ between $IMU_i$ and its corresponding bone coordinate system $b_i$ is then expressed as

$$R_i^{b_i} = R_{n_i}^g\, R_{i\_initial}.$$  (2)
7. The human body posture estimation method based on the fusion of visual and inertial information as claimed in claim 6, wherein the process of step 2) is as follows:
2.1) the difference between the measured and estimated rotation matrix, in the global coordinate system, of the bone node corresponding to $IMU_i$ is used as the IMU rotation-term constraint, the rotation-matrix measurement of the corresponding bone node being expressed as

$$R_{b_i}^{g,meas} = R_{n_i}^g\, R_i\, \big(R_i^{b_i}\big)^{-1},$$  (3)

wherein $R_i$ is the measured value of $IMU_i$, and the rotation-matrix estimate of the corresponding bone node is expressed as

$$\hat{R}_{b_i}^g(\theta) = \prod_{k \in P(b_i)} R_k,$$  (4)

wherein $P(b_i)$ is the set of all parents of bone $b_i$;
in summary, the energy function of the rotation term is defined as

$$E_R(\theta) = \lambda_R \sum_{i=1}^{5} \rho_R\Big(\big\|\psi\big(\hat{R}_{b_i}^g(\theta)^{\top} R_{b_i}^{g,meas}\big)\big\|^2\Big),$$  (5)

wherein ψ(·) extracts the vector part of the quaternion expression of a rotation matrix, $\lambda_R$ is the weight of the rotation-term energy function, and $\rho_R(\cdot)$ denotes a loss function;
2.2) minimizing the error between the acceleration measurement $a_i$ of $IMU_i$ and its estimate is used as the IMU acceleration constraint term, the acceleration estimate being

$$\hat{a}_{b_i}(t-1) = \frac{\hat{p}_{b_i}(t) - 2\,\hat{p}_{b_i}(t-1) + \hat{p}_{b_i}(t-2)}{\Delta t^2},$$  (6)

wherein $\hat{p}_{b_i}(t)$ is the estimated global position of bone $b_i$ at time t; the (t-1) on the left side of equation (6) indicates that the acceleration constraint of the previous moment is used at the current moment; the acceleration measurement $a_i^g$ in the global coordinate system is calculated from the rotation information and the acceleration measurement of the previous frame as

$$a_i^g = R_{n_i}^g\, R_i\, a_i - a_g,$$  (7)

wherein $a_g$ is the acceleration of gravity;
in summary, the energy function of the acceleration term is defined as

$$E_A(\theta) = \lambda_A \sum_{i=1}^{5} \rho_A\Big(\big\|\hat{a}_{b_i}(t-1) - a_i^g(t-1)\big\|^2\Big),$$  (8)

wherein $\lambda_A$ is the weight of the acceleration-term energy function and $\rho_A(\cdot)$ denotes a loss function;
2.3) the global coordinates (x, y, z) of the human skeleton nodes are obtained from the depth cameras of the visual sensors, and a constraint term minimizing the difference between the measured and estimated global positions of the skeleton nodes is added; the number of skeleton nodes used in the position constraint term is defined as $n_p$, the position of skeleton node j in camera coordinate system c is $p_j^c$, and the number of cameras is $n_c$; the estimate of a bone-node position is the translation part of its forward-kinematics transform,

$$\hat{p}_{b_j}(\theta) = trans\big(T_{b_j}^g(\theta)\big);$$  (9)

in summary, the energy function of the position constraint is defined as

$$E_P(\theta) = \lambda_P \sum_{c=1}^{n_c} \sum_{j=1}^{n_p} \rho_P\Big(\big\|\hat{p}_{b_j}(\theta) - T_c^g\, p_j^c\big\|^2\Big),$$  (10)

wherein $\lambda_P$ is the weight of the position-constraint energy function and $\rho_P(\cdot)$ denotes a loss function;
2.4) the freedom of motion of the actual skeleton is limited, so the posture prior term $E_{prior}(\theta)$ is used to restrain unreasonable joint motion; $E_{prior}(\theta)$ is established from the existing human posture estimation dataset "TotalCapture (2017)", which contains 126000 frames of human motion posture data;
first, k-means clustering is performed on all data in the dataset with the number of clusters chosen as k = 126000/100 = 1260; then the mean of all cluster centers is taken to obtain the posture mean μ; finally, statistical analysis of the original data yields the posture standard deviation σ and the upper and lower limits $\theta_{max}$ and $\theta_{min}$ of each bone-node degree of freedom; the posture prior term is thus defined as

$$E_{prior}(\theta) = \lambda_{prior}\, \rho_{prior}\Big(\Big\|\frac{\tilde{\theta} - \mu}{\sigma}\Big\|^2\Big), \qquad \theta_{min} \le \tilde{\theta} \le \theta_{max},$$  (11)

wherein $\tilde{\theta}$ has dimension 36 and does not include the displacement and rotation of the root node, which are not limited; $\lambda_{prior}$ is the weight of the prior-term energy function and $\rho_{prior}(\cdot)$ denotes a loss function;
2.5) in summary, the optimization problem is constructed as

$$\theta^* = \arg\min_{\theta}\; E_R(\theta) + E_A(\theta) + E_P(\theta) + E_{prior}(\theta),$$  (12)

wherein the loss function in $E_A$, $E_P$ and $E_{prior}$ is set to ρ(x) = log(1 + x), which limits the influence of abnormal values by penalizing large residuals only sub-linearly, and the weights of the optimization terms are set to $\lambda_R = 0.1$, $\lambda_P = 10$, $\lambda_A = 0.005$ and $\lambda_{prior} = 0.0001$.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202110422431.7A | 2021-04-20 | 2021-04-20 | Human body posture estimation method based on visual and inertial information fusion
Publications (1)
Publication Number | Publication Date
---|---
CN113158459A | 2021-07-23
Family
ID=76868924
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202110422431.7A | Human body posture estimation method based on visual and inertial information fusion | 2021-04-20 | 2021-04-20
Country Status (1)
Country | Link
---|---
CN | CN113158459A (en)
Patent Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102435188A (en) * | 2011-09-15 | 2012-05-02 | 南京航空航天大学 | Monocular vision/inertia autonomous navigation method for indoor environment |
CN104501814A (en) * | 2014-12-12 | 2015-04-08 | 浙江大学 | Attitude and position estimation method based on vision and inertia information |
CN106052584A (en) * | 2016-05-24 | 2016-10-26 | 上海工程技术大学 | Track space linear shape measurement method based on visual and inertia information fusion |
CN110100151A (en) * | 2017-01-04 | 2019-08-06 | 高通股份有限公司 | The system and method for global positioning system speed is used in vision inertia ranging |
CN107687850A (en) * | 2017-07-26 | 2018-02-13 | 哈尔滨工业大学深圳研究生院 | A kind of unmanned vehicle position and orientation estimation method of view-based access control model and Inertial Measurement Unit |
CN108731672A (en) * | 2018-05-30 | 2018-11-02 | 中国矿业大学 | Coalcutter attitude detection system and method based on binocular vision and inertial navigation |
CN110327048A (en) * | 2019-03-11 | 2019-10-15 | 浙江工业大学 | A kind of human upper limb posture reconstruction system based on wearable inertial sensor |
CN110345944A (en) * | 2019-05-27 | 2019-10-18 | 浙江工业大学 | Merge the robot localization method of visual signature and IMU information |
CN110375738A (en) * | 2019-06-21 | 2019-10-25 | 西安电子科技大学 | A kind of monocular merging Inertial Measurement Unit is synchronous to be positioned and builds figure pose calculation method |
CN110530365A (en) * | 2019-08-05 | 2019-12-03 | 浙江工业大学 | A kind of estimation method of human posture based on adaptive Kalman filter |
CN110617814A (en) * | 2019-09-26 | 2019-12-27 | 中国科学院电子学研究所 | Monocular vision and inertial sensor integrated remote distance measuring system and method |
CN111222437A (en) * | 2019-12-31 | 2020-06-02 | 浙江工业大学 | Human body posture estimation method based on multi-depth image feature fusion |
CN111241936A (en) * | 2019-12-31 | 2020-06-05 | 浙江工业大学 | Human body posture estimation method based on depth and color image feature fusion |
CN111595333A (en) * | 2020-04-26 | 2020-08-28 | 武汉理工大学 | Modularized unmanned vehicle positioning method and system based on visual inertial laser data fusion |
CN111578937A (en) * | 2020-05-29 | 2020-08-25 | 天津工业大学 | Visual inertial odometer system capable of optimizing external parameters simultaneously |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114332912A (en) * | 2021-11-22 | 2022-04-12 | 清华大学 | Human motion capture and joint stress analysis method based on IMU |
CN114627490A (en) * | 2021-12-15 | 2022-06-14 | 浙江工商大学 | Multi-person attitude estimation method based on inertial sensor and multifunctional camera |
CN114627490B (en) * | 2021-12-15 | 2024-10-18 | 浙江工商大学 | Multi-person gesture estimation method based on inertial sensor and multifunctional camera |
CN114396936A (en) * | 2022-01-12 | 2022-04-26 | 上海交通大学 | Method and system for estimating attitude of inertia and magnetic sensor based on polynomial optimization |
CN114396936B (en) * | 2022-01-12 | 2024-03-12 | 上海交通大学 | Polynomial optimization-based inertial and magnetic sensor attitude estimation method and system |
CN114742889A (en) * | 2022-03-16 | 2022-07-12 | 北京工业大学 | Human body dance action detection and correction method based on nine-axis attitude sensor and machine vision |
US11809616B1 (en) | 2022-06-23 | 2023-11-07 | Qing Zhang | Twin pose detection method and system based on interactive indirect inference |
CN116912948A (en) * | 2023-09-12 | 2023-10-20 | 南京硅基智能科技有限公司 | Training method, system and driving system for digital person |
CN116912948B (en) * | 2023-09-12 | 2023-12-01 | 南京硅基智能科技有限公司 | Training method, system and driving system for digital person |
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination