CN114627490A - Multi-person pose estimation method based on inertial sensor and multifunctional camera - Google Patents

Multi-person pose estimation method based on inertial sensor and multifunctional camera

Info

Publication number
CN114627490A
CN114627490A
Authority
CN
China
Prior art keywords
joint
person
multifunctional
human body
inertial sensor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111535889.XA
Other languages
Chinese (zh)
Inventor
杨文武
蔡佳航
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Gongshang University
Original Assignee
Zhejiang Gongshang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Gongshang University filed Critical Zhejiang Gongshang University
Priority to CN202111535889.XA priority Critical patent/CN114627490A/en
Publication of CN114627490A publication Critical patent/CN114627490A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/10: Navigation by using measurements of speed or acceleration
    • G01C21/12: Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165: Inertial navigation combined with non-inertial navigation instruments
    • G01C21/1656: Inertial navigation combined with non-inertial navigation instruments, with passive imaging devices, e.g. cameras
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/70: Determining position or orientation of objects or cameras

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Automation & Control Theory (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a multi-person pose estimation method based on inertial sensors and a multifunctional camera, comprising the following steps: S1, applying infrared ink labels to the multiple target persons; S2, shooting and acquiring multi-view images; S3, reconstructing the human body 3D pose; and S4, optimizing to obtain accurate 3D poses of all target persons. The method obtains the labels of the multiple target persons by means of the infrared function of the multifunctional camera, extracts the orientation information of the corresponding joints from the inertial sensors according to the matching relation between the target persons and the inertial sensors, establishes a human body kinematic model, optimizes its motion parameters using the joint orientation information, and calculates more accurate three-dimensional poses of the target persons from the optimized motion parameters, thereby remarkably improving the accuracy of multi-person three-dimensional pose estimation.

Description

Multi-person pose estimation method based on inertial sensor and multifunctional camera
Technical Field
The invention relates to the technical field of artificial intelligence, and in particular to a multi-person pose estimation method based on inertial sensors and a multifunctional camera.
Background
Human pose estimation is a fundamental problem in computer vision: it studies how to estimate the configuration of the human body. A specific pose of a person is usually represented by 17 key points: 0: nose, 1: left eye, 2: right eye, 3: left ear, 4: right ear, 5: left shoulder, 6: right shoulder, 7: left elbow, 8: right elbow, 9: left wrist, 10: right wrist, 11: left hip, 12: right hip, 13: left knee, 14: right knee, 15: left ankle, 16: right ankle. As application scenes differ, human pose estimation is divided into increasingly specific tasks; at present the academic community commonly distinguishes four basic tasks: 1. single-person pose estimation, 2. multi-person pose estimation, 3. human pose tracking, and 4. multi-person three-dimensional human pose estimation. These four tasks deepen in sequence, each generally being built on the basis of the previous one; the keypoint convention is illustrated in the sketch below.
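For illustration only, the 17-keypoint convention described above matches the widely used COCO ordering; a minimal lookup table (an assumption of this write-up, not part of the patent itself) could look as follows:

```python
# Illustrative sketch of the 17-keypoint convention described above
# (COCO-style ordering); not part of the patented method itself.
COCO_KEYPOINTS = {
    0: "nose",           1: "left_eye",      2: "right_eye",
    3: "left_ear",       4: "right_ear",     5: "left_shoulder",
    6: "right_shoulder", 7: "left_elbow",    8: "right_elbow",
    9: "left_wrist",     10: "right_wrist",  11: "left_hip",
    12: "right_hip",     13: "left_knee",    14: "right_knee",
    15: "left_ankle",    16: "right_ankle",
}

# Limbs as (proximal, distal) keypoint pairs, useful later when limb
# directions are compared against inertial-sensor readings.
LIMBS = [(5, 7), (7, 9), (6, 8), (8, 10),          # arms
         (11, 13), (13, 15), (12, 14), (14, 16)]   # legs
```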
The traditional method of multi-person three-dimensional human pose estimation is: 1. input multi-view images; 2. perform two-dimensional pose estimation on each view separately and detect the 2D joint point positions; 3. find the detections of the same person in the different views and match them across views; 4. reconstruct the three-dimensional key point positions of the target person by triangulation. However, this traditional, purely computer-vision-based method reconstructs the three-dimensional key point positions from two-dimensional detection results, and those two-dimensional detections suffer from mutual occlusion, self-occlusion, complex background interference and similar problems, so the resulting three-dimensional human pose estimates cannot meet the requirements of certain scenes.
As technology evolves, inertial sensors have slowly entered the researchers' field of view, and researchers have begun to use inertial sensor information to enhance three-dimensional estimation results. For example, to study human motion, most work on human pose estimation simplifies the human body into a multi-rigid-body system in which each limb has 6 degrees of freedom in three-dimensional space, namely its position and orientation. By extracting the information of the accelerometer, gyroscope, magnetometer and other electronic devices inside an inertial sensor, the position and orientation of a limb can be obtained; applying this information to a human body kinematic model yields the joint point positions of the target person in three-dimensional space.
Chinese patent publication CN111860216A, published 2020-10-30, discloses a human body pose estimation method combining an attention mechanism with part affinity fields. It first obtains a public data set for human pose estimation; the image to be detected is input into a stacked hourglass network, and a human body global attention map is obtained through a multi-context attention model; the global attention map is input into a multi-stage dual-branch network; a loss function guides the multi-stage dual-branch network to predict and iterate on the global attention map until the network converges, yielding a human body local attention map and part affinity fields; finally, the local attention map and the part affinity fields are clustered to obtain the human pose estimation result in the image to be detected. That method addresses the poor robustness of prior-art human pose estimation on complex continuous poses, but it does not involve combining an inertial sensor with a multifunctional infrared camera.
Disclosure of Invention
The invention addresses the problem that the joint point positions of a target person reconstructed in three dimensions by existing purely computer-vision methods are inaccurate, and provides a multi-person pose estimation method based on inertial sensors and a multifunctional camera.
In order to achieve this purpose, the invention adopts the following technical scheme: a multi-person pose estimation method based on inertial sensors and a multifunctional camera, comprising the following steps:
S1, applying infrared ink labels to the multiple target persons;
S2, shooting and acquiring multi-view images;
S3, reconstructing the human body 3D pose;
and S4, optimizing to obtain accurate 3D poses of all target persons.
In the invention, after the sensor devices are calibrated and the infrared ink labels are applied, images are captured by the multifunctional cameras and the sensor information is acquired; the human body 3D pose is reconstructed, the orientation information of the joints is calculated, and finally, after a kinematic model is established, a more accurate three-dimensional pose of each target person is calculated from the optimized motion parameters. The invention obtains the joint orientations of multiple target persons through the inertial sensors, reconstructs the three-dimensional pose, and improves the accuracy of the multi-person three-dimensional pose through fusion with the visual information.
Preferably, the step S1 includes the steps of:
s11, respectively placing inertial sensors on each joint of a human body, calibrating an accelerometer, a three-axis gyroscope and a magnetometer of the inertial sensors, and setting frequency;
s12, calibrating the multifunctional camera to obtain internal parameters and external parameters, and constructing a world coordinate system;
and S13, labeling the multiple target persons with distinct infrared ink marks.
In the invention, the joints of the human body mainly comprise the head, shoulders, elbows, wrists, thighs, knees and ankles, and the inertial sensors and the multifunctional cameras are initially set up, which facilitates the subsequent detection; a calibration sketch for step S12 is given below.
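The patent does not detail how the internal and external parameters of step S12 are obtained. A minimal sketch of one common approach, using OpenCV with an assumed checkerboard target and assumed file locations, might look like this:

```python
# Hypothetical calibration sketch for step S12 using OpenCV and a
# checkerboard target; the board size and file names are assumptions.
import glob
import cv2
import numpy as np

BOARD = (9, 6)          # inner corners of the assumed checkerboard
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calib/cam0/*.png"):      # assumed image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsic parameters (K, distortion) of one multifunctional camera.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# The extrinsics (R, t) of each camera relative to the chosen world
# coordinate system can then be estimated, e.g. with cv2.solvePnP on
# points whose world coordinates are known.
```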
Preferably, the step S2 includes the steps of:
s21, obtaining multi-view images at each moment through synchronous shooting of the multifunctional camera;
and S22, acquiring acceleration and angular velocity information through the inertial sensor.
In the present invention, angular velocity and acceleration information is obtained from the physical measurements of the inertial sensors, and the orientation information is calculated from it, for instance as in the sketch below.
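How the orientation is computed from the raw accelerometer and angular-velocity readings is not specified in the patent. A minimal, complementary-filter-style sketch, assuming SciPy and the variable names shown, integrates the gyroscope and corrects drift with the gravity direction measured by the accelerometer:

```python
# Minimal sketch (not the patented procedure) of estimating a sensor's
# orientation by integrating angular velocity and correcting drift with
# the gravity direction measured by the accelerometer.
import numpy as np
from scipy.spatial.transform import Rotation as R

def update_orientation(q, gyro, acc, dt, alpha=0.02):
    """q: current orientation (scipy Rotation, sensor -> world);
    gyro: angular velocity [rad/s] in the sensor frame;
    acc: accelerometer reading [m/s^2] in the sensor frame;
    dt: sample period; alpha: accelerometer correction weight."""
    # 1. Propagate the orientation with the gyroscope.
    q = q * R.from_rotvec(gyro * dt)

    # 2. Correct roll/pitch drift using gravity (valid when the
    #    linear acceleration of the limb is small).
    g_world = q.apply(acc / np.linalg.norm(acc))
    v = np.cross(g_world, [0.0, 0.0, 1.0])        # rotation axis
    s = np.linalg.norm(v)
    c = np.clip(g_world @ np.array([0.0, 0.0, 1.0]), -1.0, 1.0)
    if s > 1e-8:
        correction = R.from_rotvec(v / s * np.arctan2(s, c))
        q = R.from_rotvec(alpha * correction.as_rotvec()) * q
    return q
```

In use, one would call update_orientation once per sample at the frequency set in step S11.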
Preferably, the step S3 includes the steps of:
S31, reconstructing the human body 3D pose from the multi-view images;
S32, calculating the orientation information of the corresponding joints from the inertial sensor information.
In the invention, the human body 3D pose is reconstructed from the captured multi-view images, while the orientation information of the corresponding joints requires the measurements of the inertial sensors.
Preferably, the step S4 includes the steps of:
s41, obtaining the matching relation between the target person and the inertial sensor according to the matching relation between the target person and the infrared tag and the matching relation between the inertial sensor and the infrared tag, and further obtaining the orientation information of the corresponding joint of the target person;
and S42, fusing the joint orientation and 3D joint point position information of each target person, and optimizing to obtain the complete 3D pose of each target person.
In the invention, the joint orientation information of each target person, obtained from the inertial sensors via the correspondence between the label information and the inertial sensors, is fused with the corresponding 3D position information, and the accurate 3D poses of all target persons are obtained through optimization.
Preferably, the step S31 includes the steps of:
S311, detecting the two-dimensional joint point positions of the human body and the infrared ink label corresponding to each target person from the input multi-view images using a convolutional neural network;
S312, obtaining the two-dimensional pose matching relation A^t between every pair of viewing angles through the epipolar constraint of the multiple multifunctional cameras (the formula is published as an image); it is built from x̃^t, the two-dimensional detected joint point coordinates at time t, P_v^+, the pseudo-inverse of the projection matrix of the multifunctional camera v, and c̃_v, the imaging center of the multifunctional camera v, where the symbol ~ denotes homogeneous coordinates;
S313, reconstructing the 3D joint point position of the target person by triangulation in the world coordinate system formed by the multiple multifunctional cameras: the three-dimensional joint coordinate Ỹ to be solved is obtained by solving the weighted linear system, i.e. minimizing ‖diag(w)·A·Ỹ‖, where w is the joint point prediction confidence and A is the matrix formed from the camera parameters and the two-dimensional joint point coordinates (the formula is published as an image).
In this method, the multi-view images are processed to obtain the two-dimensional joint point positions of the human body and the infrared ink label corresponding to each target person; cross-view association of the target persons is achieved according to the epipolar constraint between the cameras, and finally the 3D joint point positions of the target persons are reconstructed by triangulation, as illustrated in the sketch below.
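Because the exact formulas of S312 and S313 are published only as images, the sketch below merely illustrates the standard constructions they refer to: the distance of a detected joint to the epipolar line induced by its candidate match in another view, and weighted DLT triangulation of matched 2D joints. The function names and the specific affinity measure are assumptions:

```python
# Hedged sketch of the standard constructions referred to in S312/S313;
# the patent's exact formulas are not reproduced here.
import numpy as np

def epipolar_distance(x_u, P_u, x_v, P_v):
    """Distance (pixels) from joint x_u in view u to the epipolar line
    induced by joint x_v detected in view v.
    P_u, P_v: 3x4 projection matrices; x_u, x_v: 2D joint coordinates."""
    c_v = -np.linalg.pinv(P_v[:, :3]) @ P_v[:, 3]     # camera-v center
    e = P_u @ np.append(c_v, 1.0)                     # epipole in view u
    ray_pt = P_u @ (np.linalg.pinv(P_v) @ np.append(x_v, 1.0))
    line = np.cross(e, ray_pt)                        # epipolar line in view u
    return abs(line @ np.append(x_u, 1.0)) / np.linalg.norm(line[:2])

def triangulate(points_2d, projections, confidences):
    """Weighted DLT triangulation of one joint seen in several views."""
    rows = []
    for (x, y), P, w in zip(points_2d, projections, confidences):
        rows.append(w * (x * P[2] - P[0]))
        rows.append(w * (y * P[2] - P[1]))
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                               # 3D joint position
```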
Preferably, the step S42 includes the steps of:
s421, for each target person, giving joint orientation information and 3D joint point position information thereof;
s422, establishing a human body kinematics model;
S423, optimizing the motion parameters of the kinematic model so that, in the 3D pose calculated from these parameters, the joint point positions and joint orientations deviate minimally from the given positions and orientations.
In the invention, the kinematic model comprises a number of human joints; one joint is designated the root joint, whose motion parameters comprise translation and rotation, while the motion parameters of every other joint correspond to a rotation relative to its parent joint, as in the forward-kinematics sketch below.
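A minimal forward-kinematics sketch of such a model is shown below; the joint hierarchy, bone offsets and use of SciPy rotations are assumptions made only for illustration:

```python
# Illustrative forward-kinematics sketch of the kinematic model described
# above: the root joint carries translation + rotation, every other joint
# only a rotation relative to its parent. Hierarchy/offsets are assumptions.
import numpy as np
from scipy.spatial.transform import Rotation as R

# Parent index per joint (-1 = root) and fixed bone offsets expressed in
# the parent's frame (metres); both are placeholders, not patent values.
PARENTS = [-1, 0, 1, 0, 3]                  # e.g. pelvis, spine, head, hip, knee
OFFSETS = np.array([[0, 0, 0], [0, 0, .3], [0, 0, .3], [.1, 0, 0], [0, 0, -.4]])

def forward_kinematics(root_t, rotations):
    """root_t: (3,) root translation; rotations: list of per-joint scipy
    Rotations (each joint relative to its parent). Returns the global 3D
    position of every joint."""
    n = len(PARENTS)
    glob_R = [None] * n
    glob_p = np.zeros((n, 3))
    for j in range(n):
        p = PARENTS[j]
        if p < 0:                           # root joint: translation + rotation
            glob_R[j] = rotations[j]
            glob_p[j] = root_t
        else:                               # child: rotate its offset by the parent
            glob_R[j] = glob_R[p] * rotations[j]
            glob_p[j] = glob_p[p] + glob_R[p].apply(OFFSETS[j])
    return glob_p
```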
The invention has the beneficial effects that: by combining inertial sensors with a multifunctional infrared camera, the three-dimensional poses of multiple people are reconstructed from the combination of sensor information and infrared information and fused with the multi-person three-dimensional poses obtained from visual information, which significantly improves the accuracy of multi-person three-dimensional pose estimation.
Drawings
FIG. 1 is a flow chart of the present invention;
fig. 2 is a schematic diagram of the matching calculation in step S3.
Detailed Description
Embodiment:
the embodiment provides a multi-person posture estimation method based on an inertial sensor and a multifunctional camera, and with reference to fig. 1, the method mainly includes four steps: the method comprises the following steps of marking infrared ink of a multi-target character, wherein the method comprises the following three substeps: one is as follows: placing inertial sensors on the head, shoulders, elbows, wrists, thighs, knees and ankles of a human body, calibrating the inertial sensors, including calibrating accelerometers, magnetometers and magnetometers in the inertial sensors, and presetting frequencies; and the second step is as follows: calibrating the multifunctional camera and establishing a world coordinate system; and thirdly: labeling a plurality of target figures by using infrared ink; the inertial sensor and the multifunctional camera are initially set, and detection is convenient.
Step two: acquiring multi-view images. First, multi-view images are acquired from the multifunctional cameras; then the angular velocity and acceleration data of each sensor are obtained from the inertial sensors, for the calculation of the orientation information.
Step three: reconstructing the human body 3D pose. Based on the multi-view images, the infrared ink label corresponding to each target person and the two-dimensional joint point positions of the human body are detected, specifically with a convolutional neural network. The two-dimensional pose matching relation between different viewing angles is then acquired through the epipolar constraint of the multiple cameras, which in turn realizes the cross-view association of the target persons (see the matching-calculation schematic of fig. 2). The specific matching relation (published as an image in the original document) is built from x̃^t, the two-dimensional detected joint point coordinates at time t, P_v^+, the pseudo-inverse of the projection matrix of the multifunctional camera v, and c̃_v, the imaging center of the multifunctional camera v, where the symbol ~ denotes homogeneous coordinates. The 3D joint point positions of the target person are then obtained by triangulation: the three-dimensional joint coordinate Ỹ to be solved minimizes ‖diag(w)·A·Ỹ‖, where w represents the joint prediction confidence and A represents the matrix composed of the camera parameters and the two-dimensional joint coordinates. After these steps are completed, the orientation information of the joints can be calculated: l_IMU(J_m, J_n) denotes the orientation information of the corresponding joint, O_{m,n} is the limb direction vector obtained from the inertial sensor information, and J_m, J_n are the three-dimensional positions of the two joints in the world coordinate system; the limb direction estimated from the reconstructed pose is combined with the limb direction obtained from the inertial sensor to give the final limb direction, for example as in the sketch below.
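From the surrounding definitions, l_IMU(J_m, J_n) compares the limb direction implied by the reconstructed joints J_m and J_n with the direction vector O_{m,n} obtained from the inertial sensor; the exact expression is published only as an image. A hedged sketch of such a comparison, with an assumed blending weight for the final limb direction, is:

```python
# Hedged sketch of the limb-direction comparison implied by the text
# around l_IMU(J_m, J_n); the exact formula and the blend weight are
# assumptions, not taken from the patent.
import numpy as np

def limb_direction_agreement(J_m, J_n, O_mn):
    """Cosine agreement between the limb direction J_n - J_m estimated
    from the reconstructed pose and the IMU-derived direction O_mn."""
    d_pose = (J_n - J_m) / np.linalg.norm(J_n - J_m)
    d_imu = O_mn / np.linalg.norm(O_mn)
    return float(d_pose @ d_imu)             # 1 = perfect agreement

def fused_limb_direction(J_m, J_n, O_mn, w_imu=0.5):
    """Blend the two limb directions into a final direction estimate."""
    d_pose = (J_n - J_m) / np.linalg.norm(J_n - J_m)
    d_imu = O_mn / np.linalg.norm(O_mn)
    d = (1.0 - w_imu) * d_pose + w_imu * d_imu
    return d / np.linalg.norm(d)
```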
Step four: obtaining a more accurate 3D pose. This mainly comprises two steps. First, the orientation information ori_P of the joints corresponding to each target person P in three-dimensional space is obtained (the formula is published as an image in the original document). It is derived from the matching relation, over all viewing angles v at time t, between the i-th infrared label and the target person p seen in that view, together with S_m = f(L_n), the matching relation between the m-th inertial sensor and the infrared ink label, which is fixed before the experiment; through this functional relation, the joint orientation information of the target person is obtained. Second, the joint orientation information of each target person is fused with the 3D joint point position information, and a human body kinematic model is created; the model mainly comprises a number of human joints, one joint is called the root joint, whose motion parameters comprise translation and rotation, while the motion parameters of every other joint correspond to a rotation relative to its parent joint. Finally, the parameters of the kinematic model are optimized so that the joint point positions and orientations deviate minimally from the given positions and orientations, thereby obtaining a more accurate 3D pose; a sketch of such an optimization is given below.
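The patent does not name a solver for this final optimization. A minimal sketch that casts it as a least-squares fit with SciPy, reusing the forward_kinematics helper and PARENTS table from the earlier sketch and treating the parameterization and residual weights as assumptions, could be:

```python
# Hedged sketch of the final optimisation in step four: fit the kinematic
# parameters (root translation + per-joint rotation vectors) so that the
# model's joint positions and limb directions stay close to the
# triangulated 3D joints and the IMU-derived limb directions.
# Assumes PARENTS and forward_kinematics from the earlier sketch.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

def fit_pose(target_joints, imu_dirs, limbs, w_ori=1.0):
    """target_joints: (N,3) triangulated joint positions;
    imu_dirs: {(m, n): unit limb direction} from the inertial sensors;
    limbs: list of (m, n) joint-index pairs that carry a sensor."""
    n = len(PARENTS)

    def unpack(theta):
        root_t = theta[:3]
        rots = [R.from_rotvec(theta[3 + 3 * j: 6 + 3 * j]) for j in range(n)]
        return root_t, rots

    def residuals(theta):
        root_t, rots = unpack(theta)
        joints = forward_kinematics(root_t, rots)
        res = (joints - target_joints).ravel().tolist()   # position residuals
        for (m, k) in limbs:                               # orientation residuals
            d = joints[k] - joints[m]
            d = d / np.linalg.norm(d)
            res.extend(w_ori * (d - imu_dirs[(m, k)]))
        return np.array(res)

    theta0 = np.zeros(3 + 3 * n)            # neutral initial guess
    sol = least_squares(residuals, theta0)
    return unpack(sol.x)
```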
In the invention, after the sensor devices are calibrated and the infrared ink labels are applied, the multifunctional cameras capture the images and the sensor information is acquired, so that the human body 3D pose is reconstructed and the orientation information of the joints is calculated; finally, a model is established and a more accurate three-dimensional pose of each target person is calculated from the optimized motion parameters. The invention obtains the joint orientations of multiple target persons through the inertial sensors, reconstructs the three-dimensional pose, and improves the accuracy of the multi-person three-dimensional pose through fusion with the visual information.
In the invention, camera calibration specifically refers to establishing the mapping between three-dimensional objects in the real world and the corresponding two-dimensional objects in the camera image. Infrared ink labeling specifically uses an infrared ink solution to mark the target persons, and the marks can only be seen by an infrared camera. The epipolar constraint obtains key point information and camera information through a convolutional neural network for the different viewing angles and associates the same persons appearing in different cameras. Human body kinematic model: the whole-body model is generally built as a tree structure, the movement of a body part can be regarded as the movement of a child node relative to its parent node, and the pose information of a parent node can be transmitted to its child nodes through the kinematic chain model; by selecting a certain joint as the root node of the whole-body motion, the pose information of every child node can be calculated recursively starting from the root node.
The above embodiment is illustrated and described so that the present invention may be understood, and is not intended to limit it; modifications, equivalents and improvements made within the spirit and principles of the invention are intended to be included within its scope.

Claims (7)

1. A multi-person pose estimation method based on inertial sensors and a multifunctional camera, characterized by comprising the following steps:
S1, applying infrared ink labels to the multiple target persons;
S2, shooting and acquiring multi-view images;
S3, reconstructing the human body 3D pose;
and S4, optimizing to obtain accurate 3D poses of all target persons.
2. The multi-person pose estimation method based on inertial sensors and multifunctional cameras as claimed in claim 1, wherein said step S1 comprises the following steps:
s11, respectively placing inertial sensors on each joint of a human body, calibrating an accelerometer, a three-axis gyroscope and a magnetometer of the inertial sensors, and setting frequency;
s12, calibrating the multifunctional camera to obtain internal parameters and external parameters, and constructing a world coordinate system;
and S13, labeling the multiple target persons with distinct infrared ink marks.
3. The multi-person pose estimation method based on inertial sensors and multifunctional cameras as claimed in claim 1, wherein said step S2 comprises the following steps:
s21, obtaining multi-view images at each moment through synchronous shooting of the multifunctional camera;
and S22, acquiring acceleration and angular velocity information through the inertial sensor.
4. The multi-person pose estimation method based on inertial sensors and multifunctional cameras according to claim 1, wherein the step S3 comprises the steps of:
S31, reconstructing the human body 3D pose from the multi-view images;
S32, calculating the orientation information of the corresponding joints from the inertial sensor information.
5. The multi-person pose estimation method based on inertial sensors and multifunctional cameras as claimed in claim 1, wherein said step S4 comprises the following steps:
s41, obtaining the matching relation between the target person and the inertial sensor according to the matching relation between the target person and the infrared tag and the matching relation between the inertial sensor and the infrared tag, and further obtaining the orientation information of the corresponding joint of the target person;
and S42, for each target person, fusing the joint orientation and 3D joint point position information, and optimizing to obtain the complete 3D pose.
6. The multi-person pose estimation method based on inertial sensors and multifunctional cameras according to claim 4, wherein the step S31 comprises the steps of:
S311, detecting the two-dimensional joint point positions of the human body and the infrared ink label corresponding to each target person from the input multi-view images using a convolutional neural network;
S312, obtaining the two-dimensional pose matching relation between every pair of viewing angles through the epipolar constraint of the multiple multifunctional cameras (the formula is published as an image), built from the two-dimensional detected joint point coordinates at time t, the pseudo-inverse of the projection matrix of the multifunctional camera v, and the imaging center of the multifunctional camera v, where the symbol ~ denotes homogeneous coordinates;
S313, reconstructing the 3D joint point position of the target person by triangulation in the world coordinate system formed by the multiple multifunctional cameras: the three-dimensional joint coordinate Ỹ to be solved is obtained by solving the weighted linear system, i.e. minimizing ‖diag(w)·A·Ỹ‖, where w is the joint point prediction confidence and A is the matrix formed from the camera parameters and the two-dimensional joint point coordinates (the formula is published as an image).
7. The multi-person pose estimation method based on inertial sensors and multifunctional cameras according to claim 5, wherein the step S42 comprises the steps of:
s421, for each target person, giving joint orientation information and 3D joint point position information thereof;
s422, establishing a human body kinematics model;
S423, optimizing the motion parameters of the kinematic model so that, in the 3D pose calculated from these parameters, the joint point positions and joint orientations deviate minimally from the given positions and orientations.
CN202111535889.XA 2021-12-15 2021-12-15 Multi-person pose estimation method based on inertial sensor and multifunctional camera Pending CN114627490A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111535889.XA CN114627490A (en) 2021-12-15 2021-12-15 Multi-person pose estimation method based on inertial sensor and multifunctional camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111535889.XA CN114627490A (en) 2021-12-15 2021-12-15 Multi-person pose estimation method based on inertial sensor and multifunctional camera

Publications (1)

Publication Number Publication Date
CN114627490A true CN114627490A (en) 2022-06-14

Family

ID=81897822

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111535889.XA Pending CN114627490A (en) 2021-12-15 2021-12-15 Multi-person attitude estimation method based on inertial sensor and multifunctional camera

Country Status (1)

Country Link
CN (1) CN114627490A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination