CN113421286B - Motion capturing system and method - Google Patents

Motion capturing system and method

Info

Publication number
CN113421286B (granted publication of application CN202110786864.0A)
Authority
CN
China
Prior art keywords
human body
joint
motion capture
point
positioning
Prior art date
Legal status
Active
Application number
CN202110786864.0A
Other languages
Chinese (zh)
Other versions
CN113421286A
Inventor
李子旭
杜华
王语堂
岳宗
Current Assignee
Beijing Future Tianyuan Technology Development Co ltd
Original Assignee
Beijing Future Tianyuan Technology Development Co ltd
Application filed by Beijing Future Tianyuan Technology Development Co., Ltd.
Priority to CN202110786864.0A
Publication of CN113421286A
Application granted
Publication of CN113421286B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/20: Analysis of motion
    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G06T 7/70: Determining position or orientation of objects or cameras
    • G06T 7/80: Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Abstract

The invention provides a motion capture system and method. The system comprises at least one motion capture device, a human body target detector, a human body target tracker, a human body motion pose calculator, a human body positioning calculator, a self-positioning calculator, and a human body positioning converter. The motion capture method provided by the invention can accurately localize the human body and resolve its motion pose in the ground coordinate system with as few as two optical cameras, can preserve the accuracy of the capture result even when the indoor or outdoor shooting space is limited, and greatly improves the practicality, convenience, accuracy, stability, and applicability of motion capture technology.

Description

Motion capturing system and method
Technical Field
The invention belongs to the technical field of human motion capture, and particularly relates to a portable multi-camera motion capture system and method.
Background
Human motion capture computes the motion trajectory and spatial pose of the human body, by the principles of computer vision, from image sequences of the body acquired by a computer. Motion capture techniques fall into two main categories by principle: marker-based and markerless.
Marker points are the core component of a marker-based motion capture system. Attached to the joints of a human body or target object, they allow the motion of each joint to be recovered by detecting and reconstructing the markers, with high speed, high precision, and high reliability. Marker-based capture is the most widely adopted approach in multi-camera motion capture; by marker form it divides into two-dimensional and three-dimensional markers, and in practical application three-dimensional markers are mainly used in indoor capture scenes while two-dimensional markers are common in outdoor capture scenes. At present, multi-camera marker-based motion capture technology is widely applied in film and television CG production, industrial measurement, virtual reality, robotics, and other fields.
In recent years, breakthroughs in deep learning algorithms have greatly simplified motion capture technology: 3D human model data can be generated directly from 2D images or video taken by a single camera, so single-camera AI motion capture algorithms have broad application space in entertainment, virtual reality, sports training, and other fields.
To compute the spatial positions of marker points accurately, a multi-camera motion capture system must deploy many image acquisition sensors at different positions during shooting. Increasing the number of capture viewpoints improves marker reconstruction quality and makes the system applicable under complex illumination changes such as those outdoors, but the large number of sensors makes the system unusable when the shooting space is limited. A single-camera motion capture algorithm, by contrast, generates a 3D human model from 2D images alone; the resulting model gives only a rough spatial pose and 3D position for each joint, the spatial pose is not accurate enough, and the 3D position cannot express the true position of the body in a ground coordinate system, so it is difficult to satisfy high-precision applications such as film and television CG production.
Disclosure of Invention
Compared with a single-camera motion capture system, a portable multi-view motion capture system has the advantages that the camera pose can be localized from a single frame and the human motion pose can be optimized across views; the multi-view system also carries metric size information.
Taking a portable motion capture system with a binocular structure as an example, the invention combines these advantages with the AI motion capture result of a single camera, which is both beneficial and feasible for computing the spatial pose and 3D position of every joint of a human body. Meanwhile, optionally, human body marker points are used as auxiliary information to further improve the accuracy and stability of the motion capture system.
The invention aims to provide a motion capture system that can accurately compute the spatial pose and 3D coordinates of each joint of a human body under both marker-based and markerless conditions.
It is another object of the present invention to provide a motion capture method.
The specific technical scheme of the invention is as follows.
A motion capture system, the system comprising:
a motion capture device comprising at least two image acquisition sensors, the motion capture device providing images through the image acquisition sensors;
the human body target detector comprises at least one human body joint point detector and at least one human body mark point detector, wherein the human body joint point detector is used for detecting a human body in an image acquired by the motion capture equipment to obtain a 2D joint characteristic point set of the human body on the image, and the human body mark point detector is used for detecting a person in the image acquired by the motion capture equipment to obtain a 2D mark characteristic point set of the surface of the person on the image;
the human body target tracker comprises at least one human body joint point tracker and at least one human body mark point tracker, wherein the human body joint point tracker is used for tracking the 2D joint feature point set detected by the human body joint point detector to obtain 2D joint feature point set information of different moments of a human body, and the human body mark point tracker is used for tracking the 2D mark feature point set detected by the human body mark point detector to obtain 2D mark feature point set information of each joint of the human body at different moments;
the human body motion pose calculator is used for performing motion pose calculation on the human body detected by the human body target detector, to obtain the spatial pose angle and 3D coordinates of each joint of the human body;
the human body positioning calculator comprises at least one human body mark point positioner, at least one human body joint point position optimizer and at least one human body joint point positioner, wherein the human body mark point positioner is used for calculating 3D coordinates of all mark characteristic points of a human body under a first coordinate system, the human body joint point position optimizer is used for optimizing the spatial pose of the human body joint points calculated by the human body motion pose calculator, and the human body joint point positioner is used for calculating 3D coordinates of all joints of the human body under the first coordinate system;
the self-positioning calculator is used for positioning calculation of the motion capture device, namely calculating a 3D coordinate and an attitude angle of the motion capture device under a second coordinate system;
and the human body positioning converter is used for converting the human body spatial pose from the first coordinate system into the second coordinate system.
Further, the motion capture device is a portable device that is easy to install and remove.
Further, a wireless communication device for exchanging data is included.
Further, the motion capture device includes at least one light source for illuminating a scene, an environment, or a motion capture object.
Further, the motion capture device includes at least one electronic chip processor that performs all or a subset of the following functions: a human body target detector, a human body target tracker, a human body action gesture calculator, a human body positioning calculator, a self-positioning calculator and a human body positioning converter.
Further, the motion capture device includes at least one inertial sensor that provides pose information for assisting the motion capture device in self-positioning.
Further, the motion capture device includes at least one depth image sensor that can provide pose information for assisting the motion capture device in self-positioning.
The invention also provides a motion capture method, which comprises the following steps:
1) Acquiring image sequences acquired by at least two image acquisition sensors through the motion capture device;
2) Performing human body detection and human body mark point detection on each frame of image acquired by the image acquisition sensor through a human body target detector to obtain region segmentation, a 2D joint feature point set and a 2D mark feature point set of a human body on the image;
3) Tracking the 2D joint characteristic point set and the 2D mark characteristic point set obtained by detecting the human body target through a human body target tracker respectively to obtain the 2D joint characteristic point set of the same human body at all times of different visual angles and the 2D mark characteristic point set of the joint part of the same human body at all times of different visual angles;
4) Performing regression calculation of the human body 3D model and the motion pose from a single frame image, through the human body motion pose calculator, to obtain the human body model and the spatial pose angle of each joint under a first coordinate system;
5) Performing spatial self-positioning calculation through the self-positioning calculator, using the multi-frame images acquired by the image acquisition sensors under different spatial viewing angles, to obtain the pose of the motion capture device in a second coordinate system, where the pose comprises a 3D coordinate and a pose angle;
6) Performing space positioning calculation on the detected person through a human body positioning calculator to obtain 3D coordinates of all joints of the human body and characteristic points of the surface of the human body under the first coordinate system;
7) Converting, through the human body positioning converter and using the pose of the motion capture device in the second coordinate system, the human body spatial pose from the first coordinate system into the second coordinate system.
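To make the dataflow of steps 1) to 7) concrete, the following is a minimal Python skeleton of the capture loop. All function names, data shapes, and dummy values are illustrative assumptions, not interfaces defined by the patent; each stub stands in for the corresponding calculator described above.

```python
"""Illustrative skeleton of the seven-step capture pipeline (all names assumed)."""

def acquire_frames(n_views=2):
    # step 1: one synchronized image per image acquisition sensor (dummy 2x2 images)
    return [[[0, 0], [0, 0]] for _ in range(n_views)]

def detect_targets(frame):
    # step 2: human region mask, 2D joint points, 2D marker points (dummy values)
    return {"mask": None, "joints2d": [(10.0, 20.0)], "markers2d": [(11.0, 21.0)]}

def track_targets(detections):
    # step 3: associate points across views and over time (identity here)
    return detections

def solve_pose(tracks):
    # step 4: per-joint spatial pose angles in the first (device) coordinate system
    return {"theta": [0.0] * 24}

def self_localize(frames):
    # step 5: device pose (R, t) in the second (ground) coordinate system
    return {"R": [[1, 0, 0], [0, 1, 0], [0, 0, 1]], "t": [0.0, 0.0, 0.0]}

def localize_body(tracks, pose):
    # step 6: 3D joint coordinates in the device coordinate system
    return {"joints3d_device": [(0.0, 0.0, 1.0)]}

def to_ground(body, device_pose):
    # step 7: rigid transform device frame -> ground frame: p' = R p + t
    R, t = device_pose["R"], device_pose["t"]
    out = []
    for x, y, z in body["joints3d_device"]:
        out.append((R[0][0] * x + R[0][1] * y + R[0][2] * z + t[0],
                    R[1][0] * x + R[1][1] * y + R[1][2] * z + t[1],
                    R[2][0] * x + R[2][1] * y + R[2][2] * z + t[2]))
    return out

frames = acquire_frames()
dets = [detect_targets(f) for f in frames]
tracks = track_targets(dets)
pose = solve_pose(tracks)
device_pose = self_localize(frames)
body = localize_body(tracks, pose)
joints_ground = to_ground(body, device_pose)
```

With the identity device pose used here, the ground-frame joints equal the device-frame joints; in a real run, `self_localize` would return the pose estimated in step 5.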
Further, one or more marker points or marker patterns of known spatial geometry and size are arranged fixed relative to the human body in the image, and are used for solving, or assisting in solving, the 3D coordinates of each joint of the human body under the first coordinate system.
Furthermore, the human body marker point positioner reconstructs the marker points on the surface of the human body three-dimensionally, by the stereoscopic vision principle, through two or more image acquisition sensors at the same moment, to obtain the spatial positions of the marker points.
Further, the human body positioning calculation can obtain the 3D coordinates of each joint of the human body under the first coordinate system by using the marking point positioning result obtained by the human body marking point positioner and the human body 3D model and the space posture of each joint obtained by the human body joint point position optimizer.
Further, the human body joint point position optimizer obtains the space posture and the 3D position of each joint point of the optimized human body under the first coordinate system by utilizing the human body joint point tracking results and the self-positioning calculation results of different visual angles acquired by the at least two image acquisition sensors.
Further, the human body joint point position optimizer acquires 3D coordinates of each joint of the human body under the first coordinate system by using the marking point positioning result obtained by the human body marking point positioner.
Further, the 2D joint feature point set and the 2D mark feature point set obtained by human body target detection are tracked across different viewing angles, to obtain the corresponding (same-name) identification points at the same moment under the different viewing angles.
Further, the method comprises the steps of extracting a natural texture feature 2D point set in a scene on an image, and performing self-positioning calculation by adopting an SFM principle.
Further, one or more marking points or marking patterns are fixedly arranged in the scene on the image, and characteristic information is provided for the self-positioning calculation.
Further, the human body region segmentation result obtained by the human body target detection is used to filter out the 2D feature interference within the human body region, providing effective background feature information for the self-positioning calculation.
Further, one or more inertial sensors in the motion capture device are adopted to provide pose information, so that self-positioning calculation is realized.
Further, self-positioning calculation is achieved by registering 3D features to the second coordinate system using one or more depth image sensors on the motion capture device.
The beneficial effects of the invention are that the positioning and motion pose resolving of the human body in the ground coordinate system can be accurately realized with only two optical cameras, that the accuracy of the motion capture result is ensured even when the indoor or outdoor shooting scene is limited, and that the practicality, convenience, accuracy, stability, and applicability of motion capture technology are greatly improved.
Drawings
FIG. 1 is a block diagram of a motion capture system of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
Referring to FIG. 1, a block diagram of a motion capture system of the present invention is shown at 10.
The system comprises a motion capture device 12 containing two image acquisition sensors 13 and 14. Sensors 13 and 14 synchronously acquire image sets (i.e., frames) 15 and 16 of the scene under test and transmit them to the human body target detector 20. The two image sets 15 and 16 each have their own center of projection and, most of the time, share a commonly observed person. The relevant information in the images arises from light reflected at the surface of the person and projected into each sensor, together with the joint position features and joint marker features used to compute both the relative pose between the image acquisition sensors and the spatial positions of the human joints. Because all images in a given frame are captured simultaneously and include the joint position features and joint marker features, synchronization of those features is inherent.
Each joint marker feature is fixed to a human joint, so that the person can move through space relative to the object coordinate system while the marker remains stationary on the body. This allows the person to move in space while the body surface is scanned by the image acquisition sensors.
The human body target detector 20 includes a human body joint point detector 21 and a human body marker point detector 22, which extract human joint position features and joint marker feature points from each frame image. For each frame, a 2D joint feature point set 23 and a 2D joint marker point set 24 are output. These points and features are identified in the image by their inherent characteristics. A human joint position feature is the pixel expression of a human joint, a description of the physical characteristics of the joint pose in image space; the 2D positions of the human joint feature points can be detected with a 2D human pose estimation method (see Cao Z, Hidalgo G, Simon T, et al. OpenPose: Realtime multi-person 2D pose estimation using Part Affinity Fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2019, 43(1): 172-186). A human joint marker feature is a marker pattern of known, fixed size attached near a joint; it is auxiliary information used in motion capture to mark joint positions and poses, and the 2D positions of the marker feature points can be obtained with target detection and edge extraction methods based on the geometric and gray-level features of the image.
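As a rough illustration of gray-feature-based marker detection, the sketch below thresholds a grayscale image and returns the centroid of each bright connected region. This is a deliberately naive stand-in (the patent's detector also uses geometric features and edge extraction), and all names and the threshold value here are hypothetical.

```python
def marker_centroids(img, thresh=200):
    """Naive marker detection sketch: threshold a grayscale image
    (list of rows) and return the centroid (u, v) of each bright blob."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    centroids = []
    for r in range(h):
        for c in range(w):
            if img[r][c] >= thresh and not seen[r][c]:
                # flood-fill one bright blob, collecting its pixel coordinates
                stack, pixels = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and not seen[ny][nx] and img[ny][nx] >= thresh):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                centroids.append((cx, cy))  # pixel (u, v) order
    return centroids

# tiny demo image: one 2x2 blob and one single-pixel blob
img = [
    [0, 0, 0, 0, 0],
    [0, 255, 255, 0, 0],
    [0, 255, 255, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 255],
]
```

In practice the detector would operate on real sensor images and verify each blob against the known marker geometry before accepting it.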
The human body target tracker 30 includes a human body joint point tracker 31 and a human body mark point tracker 32, which respectively track the 2D joint feature point set 23 and the 2D joint mark point set 24 to obtain the 2D joint feature information set 33 and the 2D joint mark information set 34 of the same human body across different viewing angles and different moments. Based on the temporal continuity and spatial uniqueness of human motion, the 2D points at each moment in the 2D joint feature point set 23 are tracked in both the time domain and the spatial domain: a nearest-neighbor method can be used for time-domain tracking, and spatial-domain tracking can determine all 2D joint feature information sets 33 of the same human body at all moments under different viewing angles based on the epipolar constraint principle. Because the positions of the 2D joint mark point set are highly correlated with those of the 2D joint feature point set, a distance criterion, or the relative relationship between point and plane, together with the preset marker arrangement rule, can be combined with the 2D joint feature information set 33 and the epipolar constraint principle to determine the per-joint and intra-joint tracking result 34 of each 2D joint mark point.
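The spatial-domain (cross-view) association can be sketched with the epipolar constraint: a point x in one view must lie on the epipolar line l' = F·x in the other view. The snippet below is a simplified sketch, not the tracker 32's actual logic: it associates each view-1 point with the view-2 candidate of smallest point-to-epipolar-line distance. The example F is an assumption corresponding to a rectified stereo pair, where epipolar lines are horizontal.

```python
def epipolar_distance(F, x1, x2):
    """Pixel distance from homogeneous point x2 (view 2) to the
    epipolar line l' = F @ x1 induced by x1 (view 1)."""
    l = [sum(F[i][j] * x1[j] for j in range(3)) for i in range(3)]
    num = abs(sum(l[i] * x2[i] for i in range(3)))
    den = (l[0] ** 2 + l[1] ** 2) ** 0.5
    return num / den

def match_across_views(F, pts1, pts2, max_dist=2.0):
    """Greedy same-name association: each view-1 point takes the view-2
    candidate of smallest epipolar distance, if within max_dist pixels."""
    matches = {}
    if not pts2:
        return matches
    for i, x1 in enumerate(pts1):
        d, j = min((epipolar_distance(F, x1, x2), j) for j, x2 in enumerate(pts2))
        if d <= max_dist:
            matches[i] = j
    return matches

# rectified-pair fundamental matrix: epipolar lines are horizontal (v' = v)
F = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
pts1 = [(100.0, 50.0, 1.0)]
pts2 = [(90.0, 50.5, 1.0), (80.0, 60.0, 1.0)]
```

A production tracker would additionally enforce one-to-one assignments and combine this with the temporal nearest-neighbor step described above.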
The human body motion pose calculator 40 obtains the human body region of each frame image according to the 2D joint feature information set 33, and performs regression calculation on the human body region of the image together with the 2D joint feature information set 33 through an end-to-end neural network, to obtain the human body 3D model result 41, namely the spatial pose angle (θ) of each joint, the body shape (β), and the human body 3D model.
The human body positioning calculator 50 includes a human body mark point positioner 51, a human body joint point position optimizer 52, and a human body joint point positioner 53. The human body mark point positioner 51 uses the binocular reconstruction principle, together with the result 61 obtained by the self-positioning calculator 60, to compute the 3D joint mark point result 54: the position of each 2D point in the 2D joint mark information set 34 under the motion capture device coordinate system. The human body joint point position optimizer 52 performs multi-view optimization of the human body 3D model result 41 obtained by the human body motion pose calculator, using the image acquisition sensor parameter result 61 from the self-positioning calculator 60 and the stereoscopic triangulation and epipolar constraint principles, to obtain an accurate human body 3D model result 55 (the spatial pose angle (θ) of each joint and the human body 3D model). The optimization objective function is E(β, θ) = E_J(β, θ; K, J_op) + λ_θ·E_θ(θ) + λ_α·E_α(θ) + λ_β·E_β(θ), with appropriate weights (λ_θ, λ_α, λ_β) selected for each joint part. The human body joint point positioner 53 either optimizes the position and scale of the human body 3D model result 55 according to the 3D joint mark point result 54 and transforms it into the 3D coordinates 56 of each joint under the motion capture device coordinate system, or directly performs joint position calibration on the 3D joint mark point result 54 to obtain the 3D coordinates 57 of each joint of the human body, represented by the 3D joint mark points, under the motion capture device coordinate system.
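The binocular reconstruction used by the marker point positioner 51 can be illustrated with the classic midpoint method: back-project each 2D marker observation as a ray from its camera center, then take the midpoint of the shortest segment between the two rays as the 3D point. This is one standard way to realize stereo triangulation, chosen here for brevity; the patent does not prescribe this particular formula, and all names are illustrative.

```python
def triangulate_midpoint(c1, d1, c2, d2):
    """Reconstruct a 3D point from two viewing rays (camera center c,
    direction d) as the midpoint of the shortest segment between the rays."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w0 = [a - b for a, b in zip(c1, c2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b            # zero only for parallel rays
    s = (b * e - c * d) / denom      # closest-point parameter along ray 1
    t = (a * e - b * d) / denom      # closest-point parameter along ray 2
    p1 = [c1[i] + s * d1[i] for i in range(3)]
    p2 = [c2[i] + t * d2[i] for i in range(3)]
    return [(p1[i] + p2[i]) / 2.0 for i in range(3)]

# two rays from hypothetical camera centers, both passing through (0, 0, 5)
c1, d1 = [0.0, 0.0, 0.0], [0.0, 0.0, 1.0]
c2, d2 = [1.0, 0.0, 0.0], [-1.0, 0.0, 5.0]
point = triangulate_midpoint(c1, d1, c2, d2)
```

The ray directions would in practice come from the sensor intrinsics and the pose relation 61; with noisy observations the two rays do not intersect, and the midpoint is the least-squares compromise.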
The self-positioning calculator 60 obtains the pose relation 61 between the image acquisition sensors in the motion capture device, and the pose 62 (i.e., 3D coordinate and pose angle) of the motion capture device in the ground coordinate system, from natural texture feature points, or artificial marker points fixed relative to the ground, in the image backgrounds acquired by the sensors. The pose relation 61 between the image acquisition sensors can also be computed by binocular calibration with a calibration board. Using the human body region segmentation result obtained by the human body target detector 20, the 2D feature interference from the human body region is filtered out, and the natural texture features fixed relative to the ground in the background are extracted to compute the pose of the motion capture device in the ground coordinate system. The specific principle is as follows:
Image features of the non-human regions are extracted with a feature extraction algorithm such as SIFT; matching feature points between single-frame image pairs are found with a kd-tree model; for each image pair, the fundamental matrix F is estimated from epipolar geometry, and the matches are refined with random sample consensus (RANSAC), so that background natural texture features fixed relative to the ground can be propagated in chains through the matched pairs. The pose of the motion capture device in the ground coordinate system is then computed by bundle adjustment (BA).
The BA calculation is divided into three processes. Motion-only BA: the rotation matrix R and position t of the image acquisition sensors in the motion capture device are optimized by minimizing the reprojection error between the background natural texture feature points x_i and their matched 3D points X_i in the ground coordinate system. Local BA: R and t are optimized within a local field of view, according to the background natural texture feature points P_L shared by the image pairs of the local keyframe set K_L. Global BA: R and t are optimized over the background natural texture feature points of all image pairs, finally yielding the pose 62 of the motion capture device in the ground coordinate system.
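All three BA stages minimize the same quantity: the reprojection error between observed 2D background features and projected 3D points. A minimal sketch of that objective for an ideal pinhole camera (no distortion; K, R, t assumed known, with illustrative values) is:

```python
def project(K, R, t, X):
    """Pinhole projection: x = K (R X + t), returned as pixel (u, v)."""
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    u = (K[0][0] * Xc[0] + K[0][2] * Xc[2]) / Xc[2]
    v = (K[1][1] * Xc[1] + K[1][2] * Xc[2]) / Xc[2]
    return u, v

def reprojection_error(K, R, t, points3d, points2d):
    """Sum of squared pixel residuals -- the quantity BA minimizes over R, t."""
    err = 0.0
    for X, (u_obs, v_obs) in zip(points3d, points2d):
        u, v = project(K, R, t, X)
        err += (u - u_obs) ** 2 + (v - v_obs) ** 2
    return err

# illustrative intrinsics and a perfectly consistent camera/point setup
K = [[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]]
R = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
t = [0.0, 0.0, 0.0]
pts3d = [[0.0, 0.0, 2.0], [0.5, -0.25, 4.0]]
pts2d = [project(K, R, t, X) for X in pts3d]   # noise-free observations
err = reprojection_error(K, R, t, pts3d, pts2d)
```

In practice this error is minimized over R and t (and, in global BA, also over the 3D points) with a nonlinear least-squares solver such as Levenberg-Marquardt, rather than evaluated once as here.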
The human body positioning converter 70 converts the human body joint 3D results 56 and 57 in the motion capture device coordinate system into joint 3D results 71 and 72 in the ground coordinate system by 3D coordinate system conversion.
As shown in the block diagram, one of ordinary skill in the art will readily understand the system as discrete groups of components communicating with each other over distinct digital signal connections. Because some components are realized by the functions or operations of a given hardware or software system, and many of the data communication channels shown are realized within a computer operating system or application program, the preferred embodiment consists of a combination of hardware and software components. Accordingly, the illustrated structure is provided to describe the present preferred embodiment effectively.
One of ordinary skill in the art will appreciate that, besides marker feature points placed on the surface of the human body for scanning, there are other ways to provide the target to be detected by the sensor devices.
It will be appreciated that numerous modifications are possible to those skilled in the art based on the present system. Accordingly, the foregoing description and related drawings are illustrative of the invention only and are not limiting. It should also be understood that the present invention covers any improvement, use and adjustment based on the present invention. Generally, the principles of the present invention and other improved systems based on the embodiments disclosed herein and known or conventional techniques or other systems to which the essential features of the foregoing systems may be applied are protected by the following claims.

Claims (19)

1. A motion capture system, comprising:
a motion capture device comprising at least two image acquisition sensors, the motion capture device providing images through the image acquisition sensors;
the human body target detector comprises at least one human body joint point detector and at least one human body mark point detector, wherein the human body joint point detector is used for detecting a human body in an image acquired by the motion capture equipment to obtain a 2D joint characteristic point set of the human body on the image, and the human body mark point detector is used for detecting a person in the image acquired by the motion capture equipment to obtain a 2D mark characteristic point set of the surface of the person on the image;
the human body target tracker comprises at least one human body joint point tracker and at least one human body mark point tracker, wherein the human body joint point tracker is used for tracking the 2D joint feature point set detected by the human body joint point detector to obtain 2D joint feature point set information of different moments of a human body, and the human body mark point tracker is used for tracking the 2D mark feature point set detected by the human body mark point detector to obtain 2D mark feature point set information of each joint of the human body at different moments;
the human body motion pose calculator is used for performing motion pose calculation on the human body detected by the human body target detector, to obtain the spatial pose angle and 3D coordinates of each joint of the human body;
the human body positioning calculator comprises at least one human body mark point positioner, at least one human body joint point position optimizer and at least one human body joint point positioner, wherein the human body mark point positioner is used for calculating 3D coordinates of all mark characteristic points of a human body under a first coordinate system, the human body joint point position optimizer is used for optimizing the spatial pose of the human body joint points calculated by the human body motion pose calculator, and the human body joint point positioner is used for calculating 3D coordinates of all joints of the human body under the first coordinate system;
the self-positioning calculator is used for positioning calculation of the motion capture device, namely calculating a 3D coordinate and an attitude angle of the motion capture device under a second coordinate system;
and the human body positioning converter is used for converting the human body spatial pose from the first coordinate system into the second coordinate system.
2. The system of claim 1, wherein the motion capture device is a portable device that is easy to install and remove.
3. The system of claim 1, further comprising a wireless communication device for exchanging data.
4. The system of claim 1, wherein the motion capture device comprises at least one light source for illuminating a scene, an environment, or a motion capture object.
5. The system of claim 1, wherein the motion capture device comprises at least one electronic chip processor that performs all or a subset of the following functions: a human body target detector, a human body target tracker, a human body action gesture calculator, a human body positioning calculator, a self-positioning calculator and a human body positioning converter.
6. The system of claim 1, wherein the motion capture device includes at least one inertial sensor that provides pose information for assisting the motion capture device in self-positioning.
7. The system of claim 1, wherein the motion capture device comprises at least one depth image sensor that provides pose information for assisting the motion capture device in self-positioning.
8. A motion capture method, characterized by comprising the following steps:
1) Acquiring image sequences acquired by at least two image acquisition sensors through the motion capture device;
2) Performing human body detection and human body mark point detection on each frame of image acquired by the image acquisition sensor through a human body target detector to obtain region segmentation, a 2D joint feature point set and a 2D mark feature point set of a human body on the image;
3) Tracking the 2D joint characteristic point set and the 2D mark characteristic point set obtained by detecting the human body target through a human body target tracker respectively to obtain the 2D joint characteristic point set of the same human body at all times of different visual angles and the 2D mark characteristic point set of the joint part of the same human body at all times of different visual angles;
4) Carrying out regression calculation on the human body 3D model and the action gesture by using a single frame image through a human body action gesture solver to obtain a human body model and a space gesture angle of each joint under a first coordinate system;
5) The self-positioning calculator is used for performing space self-positioning calculation by utilizing multi-frame images acquired by the acquisition image sensor under different space viewing angles to acquire the pose of the motion capture device in a second coordinate system, wherein the pose comprises a 3D coordinate and a pose angle;
6) Performing space positioning calculation on the detected person through a human body positioning calculator to obtain 3D coordinates of all joints of the human body and characteristic points of the surface of the human body under the first coordinate system;
7) And through a human body positioning converter, the pose of the motion capture device in the second coordinate system is utilized to downwards convert the human body space pose from the first coordinate system into the second coordinate system.
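Step 7 is a rigid-body transform: given the device pose obtained in step 5, joint coordinates expressed in the first (device-centric) coordinate system map into the second coordinate system. The claim does not fix an attitude-angle convention or any function names; the following is a minimal illustrative sketch assuming a Z-Y-X (yaw-pitch-roll) convention, with all identifiers being our own:

```python
import numpy as np

def yaw_pitch_roll_to_matrix(yaw, pitch, roll):
    """Rotation matrix from Z-Y-X (yaw-pitch-roll) attitude angles, in radians."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def device_to_world(joints_device, device_position, device_attitude):
    """Convert joint coordinates from the first (device) frame to the second frame.

    joints_device   : (N, 3) joint positions in the first coordinate system
    device_position : (3,) device 3D coordinate in the second coordinate system
    device_attitude : (yaw, pitch, roll) device attitude angles, in radians
    """
    R = yaw_pitch_roll_to_matrix(*device_attitude)
    # Rotate each joint into the second frame, then translate by the device position
    return joints_device @ R.T + np.asarray(device_position)
```

With zero attitude angles the conversion reduces to a pure translation by the device's 3D coordinate, which is a useful sanity check when wiring up such a converter.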
9. The method according to claim 8, wherein one or several marker points or marker patterns with known spatial geometry and dimensions are arranged on the human body in the image in a relatively fixed manner, for obtaining or assisting in obtaining the 3D coordinates of the joints of the human body in the first coordinate system.
10. The method of claim 8, wherein the human body positioning calculator uses the stereo vision principle to three-dimensionally reconstruct the marker points on the surface of the human body through two or more image acquisition sensors at the same moment, to obtain the spatial positions of the marker points.
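Claim 10 does not name a specific reconstruction algorithm; a common realization of two-view stereo reconstruction is linear (DLT) triangulation from calibrated cameras. The sketch below is an illustrative assumption, not the patented implementation:

```python
import numpy as np

def triangulate_marker(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one marker point seen by two cameras.

    P1, P2 : (3, 4) camera projection matrices (from calibration)
    x1, x2 : (2,) pixel coordinates of the same marker in each view
    Returns the 3D point in the cameras' common coordinate system.
    """
    # Each observation contributes two linear constraints on the homogeneous point
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous least-squares solution: right singular vector of A
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

In practice more than two views can be stacked into the same system, and the linear result is usually refined by nonlinear reprojection-error minimization.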
11. The method according to claim 8, wherein the human body positioning calculation obtains the 3D coordinates of each joint of the human body in the first coordinate system using the marker point positioning results obtained by the human body marker point positioner, together with the human body 3D model and the spatial pose of each joint obtained by the human body joint point position optimizer.
12. The method of claim 11, wherein the human body joint point position optimizer obtains the optimized spatial pose and 3D position of each joint point of the human body in the first coordinate system using the human body joint tracking results from the different view angles acquired by the at least two image acquisition sensors together with the self-positioning calculation results.
13. The method of claim 11, wherein the human body joint point position optimizer obtains the 3D coordinates of each joint of the human body in the first coordinate system using the marker point positioning results obtained by the human body marker point positioner.
14. The method according to claim 8, wherein the human body target tracking tracks the 2D joint feature point sets and the 2D marker feature point sets obtained by the human body target detection at different view angles, to obtain the corresponding (same-name) identification points at the same moment under the different view angles.
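Claim 14 does not say how same-name (corresponding) points across views are verified; the standard geometric check for calibrated cameras is the epipolar constraint, x2ᵀ F x1 ≈ 0 for homogeneous pixel coordinates. The following is a hypothetical sketch under that assumption:

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]_x such that skew(t) @ v == np.cross(t, v)."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_from_calibration(K1, K2, R, t):
    """Fundamental matrix between two calibrated views.

    K1, K2 : (3, 3) camera intrinsics
    R, t   : pose of camera 2 relative to camera 1 (X2 = R @ X1 + t)
    """
    E = skew(t) @ R                                  # essential matrix
    return np.linalg.inv(K2).T @ E @ np.linalg.inv(K1)

def is_correspondence(F, x1, x2, tol=1e-6):
    """Check the epipolar constraint for candidate same-name points (homogeneous pixels)."""
    return abs(x2 @ F @ x1) < tol
```

A tracker would typically combine this geometric gate with appearance or motion cues, since every point on the epipolar line satisfies the constraint.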
15. The method of claim 8, wherein the self-positioning calculation uses structure-from-motion (SFM) principles, further comprising extracting a set of 2D natural-texture feature points of the scene from the image.
16. The method according to claim 15, wherein one or several marker points or marker patterns are fixedly arranged in the scene shown in the image, providing feature information for the self-positioning calculation.
17. The method according to claim 8 or 14, wherein 2D feature disturbances within the human body region are filtered out using the human body region segmentation results obtained by the human body target detection, providing valid background feature information for the self-positioning calculation.
18. The method of claim 8, wherein the self-positioning calculation is implemented using pose information provided by one or more inertial sensors in the motion capture device.
19. The method of claim 8, wherein the self-positioning calculation is achieved by registering 3D features to the second coordinate system using one or more depth image sensors on the motion capture device.
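Claim 19's 3D feature registration is not tied to a particular algorithm; one common closed-form choice for matched 3D feature sets is the Kabsch rigid alignment, sketched here purely as an illustrative assumption:

```python
import numpy as np

def rigid_register(src, dst):
    """Closed-form (Kabsch) rigid alignment of matched 3D point sets.

    src, dst : (N, 3) matched 3D features (e.g., depth features vs. a site map)
    Returns R (3, 3) and t (3,) such that dst ≈ src @ R.T + t.
    """
    src_c = src - src.mean(axis=0)           # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

Given such an (R, t), the device pose in the second coordinate system follows directly, which is exactly the output the self-positioning calculation needs.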
CN202110786864.0A 2021-07-12 2021-07-12 Motion capturing system and method Active CN113421286B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110786864.0A CN113421286B (en) 2021-07-12 2021-07-12 Motion capturing system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110786864.0A CN113421286B (en) 2021-07-12 2021-07-12 Motion capturing system and method

Publications (2)

Publication Number Publication Date
CN113421286A CN113421286A (en) 2021-09-21
CN113421286B true CN113421286B (en) 2024-01-02

Family

ID=77720772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110786864.0A Active CN113421286B (en) 2021-07-12 2021-07-12 Motion capturing system and method

Country Status (1)

Country Link
CN (1) CN113421286B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114283447B (en) * 2021-12-13 2024-03-26 北京元客方舟科技有限公司 Motion capturing system and method

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101982836A (en) * 2010-10-14 2011-03-02 西北工业大学 Mark point identification initializing method based on principal components analysis (PCA) in motion capture system
CN102855470A (en) * 2012-07-31 2013-01-02 中国科学院自动化研究所 Estimation method of human posture based on depth image
CN103198492A (en) * 2013-03-28 2013-07-10 沈阳航空航天大学 Human motion capture method
CN104700433A (en) * 2015-03-24 2015-06-10 中国人民解放军国防科学技术大学 Vision-based real-time general movement capturing method and system for human body
CN105631861A (en) * 2015-12-21 2016-06-01 浙江大学 Method of restoring three-dimensional human body posture from unmarked monocular image in combination with height map
CN108376405A (en) * 2018-02-22 2018-08-07 国家体育总局体育科学研究所 Human movement capture system and method for catching based on binary sense tracing system
CN108564642A (en) * 2018-03-16 2018-09-21 中国科学院自动化研究所 Unmarked performance based on UE engines captures system
CN109000582A (en) * 2018-03-15 2018-12-14 杭州思看科技有限公司 Scan method and system, storage medium, the equipment of tracking mode three-dimensional scanner
CN110544302A (en) * 2019-09-06 2019-12-06 广东工业大学 Human body action reconstruction system and method based on multi-view vision and action training system
CN110633005A (en) * 2019-04-02 2019-12-31 北京理工大学 Optical unmarked three-dimensional human body motion capture method
CN111489392A (en) * 2020-03-30 2020-08-04 清华大学 Single target human motion posture capturing method and system in multi-person environment
CN111932678A (en) * 2020-08-13 2020-11-13 北京未澜科技有限公司 Multi-view real-time human motion, gesture, expression and texture reconstruction system
CN212729816U (en) * 2019-12-12 2021-03-19 中国科学院深圳先进技术研究院 Human motion capture system
CN112907631A (en) * 2021-02-20 2021-06-04 北京未澜科技有限公司 Multi-RGB camera real-time human body motion capture system introducing feedback mechanism
CN113033369A (en) * 2021-03-18 2021-06-25 北京达佳互联信息技术有限公司 Motion capture method, motion capture device, electronic equipment and computer-readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8023726B2 (en) * 2006-11-10 2011-09-20 University Of Maryland Method and system for markerless motion capture using multiple cameras
US8755569B2 (en) * 2009-05-29 2014-06-17 University Of Central Florida Research Foundation, Inc. Methods for recognizing pose and action of articulated objects with collection of planes in motion


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Human Motion Capture Algorithm Based on Inertial Sensors; Pengzhan Chen et al.; Journal of Sensors; full text *
Human body motion capture system based on wireless inertial sensors; Zhang Hongchao et al.; Computer Knowledge and Technology; full text *
Research on human action recognition methods based on video and 3D motion capture data; Zhao Qiong; CNKI; full text *

Also Published As

Publication number Publication date
CN113421286A (en) 2021-09-21

Similar Documents

Publication Publication Date Title
US9235753B2 (en) Extraction of skeletons from 3D maps
US8848035B2 (en) Device for generating three dimensional surface models of moving objects
CN102135236B (en) Automatic non-destructive testing method for internal wall of binocular vision pipeline
CN111881887A (en) Multi-camera-based motion attitude monitoring and guiding method and device
CN109059895A (en) A kind of multi-modal indoor ranging and localization method based on mobile phone camera and sensor
CN107167073A (en) A kind of three-dimensional rapid measurement device of linear array structure light and its measuring method
CN106996748A (en) A kind of wheel footpath measuring method based on binocular vision
CN110084243A (en) It is a kind of based on the archives of two dimensional code and monocular camera identification and localization method
CN108209926A (en) Human Height measuring system based on depth image
KR101126626B1 (en) System and Method for large-scale 3D reconstruction
CN107729893A (en) A kind of vision positioning method of clapper die spotting press, system and storage medium
CN110146030A (en) Side slope surface DEFORMATION MONITORING SYSTEM and method based on gridiron pattern notation
CN109373912A (en) A kind of non-contact six-freedom displacement measurement method based on binocular vision
CN109341668A (en) Polyphaser measurement method based on refraction projection model and beam ray tracing method
CN110675453A (en) Self-positioning method for moving target in known scene
Wong et al. Fast acquisition of dense depth data by a new structured light scheme
CN109613974A (en) A kind of AR household experiential method under large scene
CN113421286B (en) Motion capturing system and method
CN113487674B (en) Human body pose estimation system and method
CN112253913A (en) Intelligent visual 3D information acquisition equipment deviating from rotation center
Su et al. Obtaining obstacle information by an omnidirectional stereo vision system
CN113487726A (en) Motion capture system and method
CN111860275B (en) Gesture recognition data acquisition system and method
Kollmitzer Object detection and measurement using stereo images
CN114548224A (en) 2D human body pose generation method and device for strong interaction human body motion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant