CN114299118B - Optical motion capture mark point real-time complementation method based on time reversal symmetry - Google Patents


Publication number: CN114299118B
Authority: CN (China)
Prior art keywords: time, coordinate system, root node, real, marking
Legal status: Active
Application number: CN202111659975.1A
Original language: Chinese (zh)
Other versions: CN114299118A
Inventors: 翁冬冬, 王怡晗, 李冬, 郭署山
Current Assignee: Beijing Institute of Technology (BIT)
Original Assignee: Beijing Institute of Technology (BIT)
Priority/filing date: 2021-12-31
Publication of CN114299118A: 2022-04-08
Publication of CN114299118B (grant): 2024-09-03


Landscapes: Image Analysis (AREA)

Abstract

The invention provides a real-time completion method for optical motion-capture marker points based on time-reversal symmetry, and is the first to apply time-reversal symmetry to marker-point completion: because human motion is symmetric when traversed forward or backward in time, sequence features extracted along both temporal directions are meaningful for recovering lost information. Compared with the offline approaches of the prior art, the method completes marker points in real time, handles long stretches of missing data that prior methods cannot, and achieves higher completion accuracy.

Description

Optical motion capture mark point real-time complementation method based on time reversal symmetry
Technical Field
The invention belongs to the technical field of optical motion capture, and particularly relates to a real-time completion method for optical motion-capture marker points based on time-reversal symmetry.
Background
Marker-based optical motion capture is a key technology in motion acquisition, analysis, and retargeting. Cameras placed around the capture space record the image coordinates of the marker points, from which their spatial positions are reconstructed; the positions and orientations of the human skeleton at the current moment are then computed from the three-dimensional coordinates of the marker point cloud. Because the environment and the moving body itself easily occlude markers, and missing markers cause position and pose reconstruction to fail, completing lost marker points is a central problem for marker-based optical motion capture.
Vicon is one of the optical motion capture systems most commonly used in current academic research. When a marker point is lost, Nexus, the companion software of the Vicon system, provides several interpolation methods: cubic or quintic spline interpolation based on the marker's own trajectory (Spline Fill), trajectory interpolation based on one related marker (Pattern Fill), and trajectory interpolation based on three or more related markers (Rigid Body Fill). These methods can reconstruct a smooth motion trajectory.
However, the Vicon system can only fill marker points offline after a complete segment of capture data has been obtained; it cannot complete them in real time, can handle at most 60 consecutive lost frames, and fails when markers are lost for longer.
Another prior scheme uses Kalman filtering: assuming constant-velocity sampling, the velocity is estimated from historical frames, and the position of the current frame's lost marker point is then predicted from that historical velocity.
Disclosure of Invention
In view of the above, the invention aims to provide a real-time completion method for optical motion-capture marker points based on time-reversal symmetry, which can handle long stretches of missing data that prior methods cannot, with higher completion accuracy.
A real-time completion method for optical motion-capture marker points based on time-reversal symmetry comprises the following steps:
Step 1, acquire training data. Specifically:
acquire the three-dimensional position coordinates of each existing marker point in the optical coordinate system as raw input data;
convert the raw input data from the optical coordinate system to the root-node coordinate system, whose origin is the geometric centre of the waist marker points;
randomly zero the position coordinates of marker points in the raw input data to simulate marker loss, yielding the training data.
Step 2, input the training data of Step 1 into a network structure for network training.
Step 3, for an action sequence requiring completion, acquired in real time, convert the coordinates of each marker point into the root-node coordinate system and input them into the network trained in Step 2, obtaining the action sequence with completed position coordinates.
Preferably, the root-node coordinate system in Step 1 is established as follows:
Let the normalized vectors of the three coordinate axes x, y, z of the root-node coordinate system be r_x, r_y, r_z. Using an optimization method, take the straight line and the plane closest to all waist marker points, and define the line's direction vector and the plane's normal vector as the initial values r_x′ and r_y′ respectively. The normalized z-axis vector is then expressed as
r_z = r_x′ × r_y′ (1)
Let the x-axis normalized vector r_x coincide with r_x′, i.e. r_x = r_x′; then r_y = r_z × r_x,
thereby establishing the root-node coordinate system.
Preferably, the random zeroing of marker-point position coordinates in Step 1 proceeds as follows:
Represent the positions of the marker points in the raw input data as a matrix P_raw ∈ R^(f×N), whose number of rows equals the number of frames f of the sequence and whose number of columns equals the number of marker points N; each element stores one marker's coordinates in the root-node coordinate system.
First generate a random matrix A of the same size as the position matrix, each element a_ij ∈ [0,1] drawn uniformly; given a loss rate α, set the corresponding element of a new matrix A_p to 1 if a_ij < (1 − α) and to 0 otherwise. The training data matrix after random zeroing is
P_miss = A_p · P_raw
with the product taken element-wise.
Preferably, the network structure is a BiLSTM, BiRNN, or BiGRU network.
Preferably, the network structure adopts a BiLSTM network, and the loss function adopts the mean square error.
The invention has the following beneficial effects:
The invention provides a real-time completion method for optical motion-capture marker points based on time-reversal symmetry, and is the first to apply time-reversal symmetry to marker-point completion: because human motion is symmetric when traversed forward or backward in time, sequence features extracted along both temporal directions are meaningful for recovering lost information. Compared with the offline approaches of the prior art, the method completes marker points in real time, handles long stretches of missing data that prior methods cannot, and achieves higher completion accuracy.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of converting a marker point from an optical coordinate system to a local coordinate system;
FIG. 3 is the BiLSTM network architecture used in an embodiment of the present invention.
Detailed Description
The invention will now be described in detail by way of example with reference to the accompanying drawings.
Step 1, data preprocessing
The main purpose of data preprocessing is to convert the raw input data from the optical coordinate system to the root-node coordinate system. As shown in FIG. 2, the four marker points on the waist are by default assumed never to be lost; denote their position coordinates in the optical coordinate system by p_i (i = 1, ..., 4). The origin of the root-node coordinate system is the geometric centre of these four marker points, i.e.
o_root = (1/4) Σ_{i=1}^{4} p_i
The normalized vectors of the three coordinate axes of the root-node coordinate system are r_x, r_y, r_z. Using an optimization method, take the straight line and the plane closest to the 4 waist marker points; the line's direction vector and the plane's normal vector give the initial values r_x′ and r_y′ of axes r_x and r_y. The z-axis vector is
r_z = r_x′ × r_y′ (1)
Since the initial r_x′ and r_y′ are not necessarily orthogonal, let r_x coincide with r_x′, i.e. r_x = r_x′; then
r_y = r_z × r_x (2)
The three-dimensional position coordinates of the marker point with index i in the root-node coordinate system are then
p_i^root = [r_x r_y r_z]^T (p_i − o_root)
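As an illustrative sketch (not part of the patent text), the frame construction above can be implemented with an SVD-based fit: assuming the best-fit line direction and plane normal of the centred waist markers are taken as the first and last principal components, with function names of my own choosing.

```python
import numpy as np

def root_frame(waist_pts):
    """Build the root-node coordinate frame from the waist marker positions.

    waist_pts: (4, 3) array of marker coordinates in the optical frame.
    Returns (origin, R) where the columns of R are the axes r_x, r_y, r_z.
    """
    origin = waist_pts.mean(axis=0)      # geometric centre of the waist markers
    centred = waist_pts - origin
    # PCA via SVD: the first right-singular vector is the direction of the
    # best-fit line (r_x'); the last is the normal of the best-fit plane (r_y').
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    rx_p, ry_p = vt[0], vt[2]
    rz = np.cross(rx_p, ry_p)
    rz /= np.linalg.norm(rz)             # Eq. (1): r_z = r_x' x r_y'
    rx = rx_p / np.linalg.norm(rx_p)     # let r_x coincide with r_x'
    ry = np.cross(rz, rx)                # Eq. (2): r_y = r_z x r_x
    return origin, np.stack([rx, ry, rz], axis=1)

def to_root(p, origin, R):
    """Express optical-frame point(s) p in the root-node frame."""
    return (np.asarray(p) - origin) @ R
```

Because the SVD already yields orthonormal directions, the re-orthogonalization in Eq. (2) is a no-op here, but it is kept to mirror the derivation, which allows any optimizer whose r_x′ and r_y′ are not exactly orthogonal.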
For the training data, marker loss must additionally be simulated by randomly zeroing marker coordinates after converting them to the local coordinate system. Record the positions of the complete marker points in an action sequence as a matrix P_raw ∈ R^(f×N), whose rows correspond to the f frames of the sequence and whose columns to the N marker points; each element stores one marker's root-node coordinates.
The random zeroing proceeds as follows: first generate a random matrix A of the same size as the position matrix, each element a_ij ∈ [0,1] drawn uniformly; given a loss rate α, set the corresponding element of a new matrix A_p to 1 if a_ij < (1 − α) and to 0 otherwise. The positions with lost marker points are then
P_miss = A_p · P_raw (3)
with the product taken element-wise.
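A minimal sketch of the random zeroing of Eq. (3), assuming the mask is drawn independently for every marker in every frame; the function name and signature are illustrative, not from the patent:

```python
import numpy as np

def random_zero(p_raw, alpha, rng=None):
    """Simulate marker loss: each marker/frame entry is zeroed with probability alpha.

    p_raw: (f, N, 3) marker positions in the root-node frame (f frames, N markers).
    alpha: loss rate in [0, 1].
    Returns (p_miss, mask) where p_miss = A_p * P_raw element-wise (Eq. 3).
    """
    rng = np.random.default_rng(rng)
    a = rng.random(p_raw.shape[:2])                  # a_ij ~ U[0, 1)
    a_p = (a < (1.0 - alpha)).astype(p_raw.dtype)    # 1 = kept, 0 = lost
    p_miss = p_raw * a_p[..., None]                  # broadcast mask over xyz
    return p_miss, a_p
```

Masking whole (x, y, z) triples per marker matches occlusion, where a marker disappears entirely rather than losing a single coordinate.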
Step 2, training a network model
The training data of Step 1 are input into the marker-completion network. As shown in FIG. 3, the network consists mainly of a BiLSTM module and fully connected (FC) modules with shared parameters, the BiLSTM itself consisting of a forward-propagating and a backward-propagating LSTM. The hidden-layer size of the LSTM is 2048; the size of the FC layer depends on the number of marker points, so with m marker points the FC output size is 3m. The loss function is the mean square error. Here P_miss^(t_n) denotes the position coordinates of the frame-t_n marker points after data preprocessing, F_(t_n) is the feature of frame t_n, P_GLM ∈ R^(n×m×3) is the ground truth of the marker position coordinates in the local coordinate system over the n frames, and P̂^(t_n) is the completed marker position coordinates of frame t_n.
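A hedged PyTorch sketch of the network just described: a bidirectional LSTM followed by a fully connected layer of size 3m applied at every frame, trained with mean-squared error. The class name and single-layer layout are assumptions; the patent specifies only the module types, the 2048 hidden size, and the 3m output.

```python
import torch
import torch.nn as nn

class MarkerCompletionNet(nn.Module):
    """Sketch of the completion network: BiLSTM + shared per-frame FC layer."""

    def __init__(self, n_markers, hidden=2048):
        super().__init__()
        in_dim = 3 * n_markers
        # Forward- and backward-propagating LSTMs in one bidirectional module
        self.bilstm = nn.LSTM(in_dim, hidden, batch_first=True,
                              bidirectional=True)
        # Shared FC layer; output size 3m as in the description
        self.fc = nn.Linear(2 * hidden, in_dim)

    def forward(self, p_miss):
        # p_miss: (batch, frames, 3m) masked marker coordinates
        feats, _ = self.bilstm(p_miss)   # F_t: bidirectional sequence features
        return self.fc(feats)            # completed coordinates per frame

# Mean-squared-error loss between completed and ground-truth positions
loss_fn = nn.MSELoss()
```

Because the whole (possibly masked) sequence is processed in one pass, the backward LSTM supplies the time-reversed features that the invention's symmetry argument calls for.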
Step 3, applying the network model
For an action sequence requiring completion, acquired in real time, transform its coordinates according to the method of Step 1 and input the result into the network model trained in Step 2, obtaining the action sequence with completed position coordinates.
The key point of the invention is that the time-reversal property of human motion is exploited for the first time in marker-point completion: if it is reasonable to predict the current marker motion state from past marker data, it is equally reasonable to predict the past state from current data. The time-reversal constraint is realized with a bidirectional long short-term memory network (BiLSTM), and all methods that complete marker points under this constraint, i.e. by extracting bidirectional sequence features, fall within the protection scope of the invention, including but not limited to BiLSTM, BiRNN, and BiGRU.
Compared with other methods, the invention has high real-time performance: the processing speed reaches 40 frames per second, and marker points are completed with an accuracy within 0.5 cm. The marker-completion function of the Vicon system's companion software cannot be used when more than 60 consecutive frames are lost, whereas the invention achieves high-accuracy completion even when markers are lost for longer than 60 frames.
In summary, the above embodiments are only preferred embodiments of the present invention, and are not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (5)

1. A real-time completion method for optical motion-capture marker points based on time-reversal symmetry, characterized by comprising the following steps:
Step 1, acquiring training data, specifically:
acquiring the three-dimensional position coordinates of each existing marker point in the optical coordinate system as raw input data;
converting the raw input data from the optical coordinate system to a root-node coordinate system, whose origin is the geometric centre of the waist marker points;
randomly zeroing the position coordinates of marker points in the raw input data to simulate marker loss, obtaining the training data;
Step 2, inputting the training data of Step 1 into a network structure for network training;
Step 3, for an action sequence requiring completion, acquired in real time, converting the coordinates of each marker point into the root-node coordinate system and inputting them into the network trained in Step 2, obtaining the action sequence with completed position coordinates.
2. The real-time completion method for optical motion-capture marker points based on time-reversal symmetry according to claim 1, characterized in that the root-node coordinate system in Step 1 is established as follows:
Let the normalized vectors of the three coordinate axes x, y, z of the root-node coordinate system be r_x, r_y, r_z. Using an optimization method, take the straight line and the plane closest to all waist marker points, and define the line's direction vector and the plane's normal vector as the initial values r_x′ and r_y′ respectively; the normalized z-axis vector is expressed as
r_z = r_x′ × r_y′ (1)
Let the x-axis normalized vector r_x coincide with r_x′, i.e. r_x = r_x′; then r_y = r_z × r_x,
thereby establishing the root-node coordinate system.
3. The real-time completion method for optical motion-capture marker points based on time-reversal symmetry according to claim 1, characterized in that the random zeroing of marker-point position coordinates in Step 1 proceeds as follows:
Represent the positions of the marker points in the raw input data as a matrix P_raw ∈ R^(f×N), whose number of rows equals the number of frames f of the sequence and whose number of columns equals the number of marker points N; each element stores one marker's coordinates in the root-node coordinate system.
First generate a random matrix A of the same size as the position matrix, each element a_ij ∈ [0,1] drawn uniformly; given a loss rate α, set the corresponding element of a new matrix A_p to 1 if a_ij < (1 − α) and to 0 otherwise. The training data matrix after random zeroing is
P_miss = A_p · P_raw
with the product taken element-wise.
4. The real-time completion method for optical motion-capture marker points based on time-reversal symmetry according to claim 1, characterized in that the network structure is a BiLSTM, BiRNN, or BiGRU network.
5. The real-time completion method for optical motion-capture marker points based on time-reversal symmetry according to claim 4, characterized in that the network structure adopts a BiLSTM network and the loss function adopts the mean square error.
CN202111659975.1A 2021-12-31 2021-12-31 Optical motion capture mark point real-time complementation method based on time reversal symmetry Active CN114299118B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111659975.1A CN114299118B (en) 2021-12-31 2021-12-31 Optical motion capture mark point real-time complementation method based on time reversal symmetry


Publications (2)

Publication Number Publication Date
CN114299118A 2022-04-08
CN114299118B 2024-09-03

Family

ID=80972944

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111659975.1A Active CN114299118B (en) 2021-12-31 2021-12-31 Optical motion capture mark point real-time complementation method based on time reversal symmetry

Country Status (1)

Country Link
CN (1) CN114299118B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104424650A (en) * 2013-08-21 2015-03-18 中国人民解放军第二炮兵工程大学 Arm information compensation method in optical profile type human body motion capture
CN105551059A (en) * 2015-12-08 2016-05-04 国网山西省电力公司技能培训中心 Power transformation simulation human body motion capturing method based on optical and inertial body feeling data fusion

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2849150A1 (en) * 2013-09-17 2015-03-18 Thomson Licensing Method for capturing the 3D motion of an object, unmanned aerial vehicle and motion capture system
CN109145788B (en) * 2018-08-08 2020-07-07 北京云舶在线科技有限公司 Video-based attitude data capturing method and system
CN113688683A (en) * 2021-07-23 2021-11-23 网易(杭州)网络有限公司 Optical motion capture data processing method, model training method and device


Also Published As

Publication number Publication date
CN114299118A (en) 2022-04-08

Similar Documents

Publication Publication Date Title
CN111462329B (en) Three-dimensional reconstruction method of unmanned aerial vehicle aerial image based on deep learning
WO2019170166A1 (en) Depth camera calibration method and apparatus, electronic device, and storage medium
CN102855470B (en) Estimation method of human posture based on depth image
CN102848389B (en) Realization method for mechanical arm calibrating and tracking system based on visual motion capture
CN103913131B (en) Free curve method vector measurement method based on binocular vision
WO2019029099A1 (en) Image gradient combined optimization-based binocular visual sense mileage calculating method
CN112233179B (en) Visual odometer measuring method
CN107291879A (en) The method for visualizing of three-dimensional environment map in a kind of virtual reality system
CN108416428B (en) Robot vision positioning method based on convolutional neural network
CN108154550A (en) Face real-time three-dimensional method for reconstructing based on RGBD cameras
CN103323027B (en) Star point reconstruction-based star sensor dynamic-compensation method
CN102221884B (en) Visual tele-existence device based on real-time calibration of camera and working method thereof
CN111062326A (en) Self-supervision human body 3D posture estimation network training method based on geometric drive
CN115147545A (en) Scene three-dimensional intelligent reconstruction system and method based on BIM and deep learning
CN112102403B (en) High-precision positioning method and system for autonomous inspection unmanned aerial vehicle in power transmission tower scene
CN107862733A (en) Large scale scene real-time three-dimensional method for reconstructing and system based on sight more new algorithm
WO2024094227A1 (en) Gesture pose estimation method based on kalman filtering and deep learning
CN110458944A (en) A kind of human skeleton method for reconstructing based on the fusion of double-visual angle Kinect artis
CN111914618A (en) Three-dimensional human body posture estimation method based on countermeasure type relative depth constraint network
CN106643735A (en) Indoor positioning method and device and mobile terminal
CN114494427B (en) Method, system and terminal for detecting illegal behaviors of person with suspension arm going off station
CN114299118B (en) Optical motion capture mark point real-time complementation method based on time reversal symmetry
CN118276061A (en) External parameter joint calibration method between color camera, laser radar and rotating shaft
CN114529703A (en) Entropy increase optimization-based point cloud global matching method for large complex components
CN114049401A (en) Binocular camera calibration method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant