CN1766831A: Skeleton motion extraction method for optics-based motion capture data (Google Patents)

Info

Publication number: CN1766831A (application CN200510053595A; granted as CN100361070C)
Authority
CN
China
Prior art keywords: bone, motion, frame, virtual, distance
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 200510053595
Other languages: Chinese (zh)
Other versions: CN100361070C (en)
Inventors: 王兆其 (Wang Zhaoqi), 文高进 (Wen Gaojin), 朱登明 (Zhu Dengming), 夏时洪 (Xia Shihong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
YANTAI HUITONG NETWORK TECHNOLOGY Co Ltd
Original Assignee
Institute of Computing Technology of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Computing Technology of CAS filed Critical Institute of Computing Technology of CAS
Priority to CNB2005100535958A (patent CN100361070C)
Publication of CN1766831A
Application granted
Publication of CN100361070C
Legal status: Expired - Fee Related (anticipated expiration)

Landscapes

  • Processing Or Creating Images (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The present invention discloses a skeleton motion extraction method for optics-based motion capture data, comprising the following steps: estimate joint center positions from marker locations and construct a skeletal system coarsely matched to the captured object; establish a local coordinate system on each bone of the skeletal system; compute initial skeleton motion data from the bone local coordinates, and establish a skeletal system and a virtual marker system matched to the captured object; divide the skeletal system into bone chains, define a distance function between the real and virtual marker points, minimize that distance chain by chain and frame by frame while continually updating the virtual marker coordinates; once the distance between the virtual and real markers is stable, the resulting skeleton motion data is the output. The method extracts skeleton motion that accurately matches the optics-based motion capture data, relaxes the accuracy requirement on marker placement without degrading the accuracy of the capture data, and is not tied to any particular marker placement scheme.

Description

Bone motion extraction method based on optical motion capture data
Technical Field
The invention relates to a bone motion extraction method for motion capture data, and in particular to a bone motion extraction method for optics-based motion capture data.
Background
Motion capture technology directly captures the motion of a moving object, represents that motion digitally, and processes the resulting motion data by computer. For background on motion capture technology, see reference 1, the Chinese patent titled "Method for acquiring motion capture data", application No. 00803619.5. It states that motion capture technology records the motion of a real human or animal in three-dimensional form using marker points or sensors, and that motion capture devices fall into four categories: acoustic, optical, electromagnetic, and mechanical. Of these, optical motion capture devices have the widest practical range and the highest precision.
Optics-based motion capture data refers to motion data captured by an optical capture device. It is, in effect, a digital representation of the positions in three-dimensional space of the marker points attached to a human or animal body at each moment of the motion. A captured motion can last from several seconds to tens of seconds, and the cameras jointly take 30 to 120 three-dimensional snapshots per second. Each snapshot records the three-dimensional positions of all markers attached to the body at one moment and is called an optics-based motion capture data frame. A complete optics-based motion capture datum is formed from a series of successive such frames and is thus, in effect, a series of three-dimensional coordinate values. The optical capture of an object's motion is by now a mature technology, and the three-dimensional coordinates of the markers reach sub-millimeter precision.
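As a concrete illustration of the data layout just described (the marker name and values below are assumptions for illustration, not from the patent), a capture can be held as a list of frames, each mapping marker names to three-dimensional coordinates:

```python
from typing import Dict, List, Tuple

Vec3 = Tuple[float, float, float]
Frame = Dict[str, Vec3]  # marker name -> (x, y, z) in world coordinates

def capture_duration(frames: List[Frame], fps: float) -> float:
    """Cameras take 30-120 snapshots per second, so duration = frames / fps."""
    return len(frames) / fps

# A hypothetical 4-second clip at 120 fps: 480 frames of one wrist marker.
clip: List[Frame] = [
    {"r_wrist": (0.40 + 0.001 * t, 1.10, 0.25)} for t in range(480)
]
print(capture_duration(clip, 120.0))  # 4.0
```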
In computer simulation of human motion, the body is driven by the skeletal system under the skin. The human skeletal system is composed of joints and bones, and the body can be made to perform various motions only by inputting joint angles that rotate the corresponding joints and thereby drive the bones. Optics-based motion capture data, however, is only a set of continuous three-dimensional coordinate positions; after it is input into the computer, a data-processing program, i.e. a bone motion extraction program for optical motion capture data, is required to convert the input three-dimensional coordinates into joint angle values, i.e. bone motion data, which can then drive the virtual human body to perform the corresponding actions.
Existing implementations of such bone motion extraction programs are essentially each tied to a particular marker placement scheme. Placement must be very accurate, and even a slight deviation in a marker's position prevents a correct result. Moreover, these implementations use a joint-angle solving technique called inverse kinematics, which artificially limits the flexibility of some joints so that they can rotate only about one or two axes, whereas real joints rotate about three axes. Consequently, the bone motions these programs extract only approximately reflect, rather than accurately represent, the captured motion.
In addition, when computer human motion simulation is applied to sports or military simulation, the fidelity requirement is far higher than for general human motion simulation: the captured motion must be reproduced as accurately as possible, which the methods used by current optics-based bone motion extraction programs cannot achieve.
Disclosure of Invention
The invention aims to overcome the defects of existing methods, namely that the exact position of a marker point is difficult to control and that a correct result is then difficult to obtain from the captured data. It therefore provides a bone motion extraction method for optics-based motion capture data that can extract bone motion accurately matched to the input data.
To achieve the above object, the present invention provides a bone motion extraction method based on optical motion capture data, comprising the following steps:
Input the optics-based motion capture data captured by the optical capture device into a computer, estimate the position of each joint center between adjacent bones from the marker positions, and thereby construct a skeletal system roughly matched to the captured object. Establish a local coordinate system on each bone of the constructed skeletal system. Compute initial bone motion data from the bone local coordinates, and establish a skeletal system and a virtual marker system matched to the captured object. Divide the established skeletal system into bone chains, define a distance function between the real and virtual marker points of each chain, and minimize that distance chain by chain and frame by frame while continually updating the virtual marker coordinates. Once the distance between the virtual and real markers is stable, the resulting bone motion data is the output: the bone motion data of the optics-based motion capture data.
In the above technical solution, a preferred method further includes filtering the final bone motion data with a quaternion linear time-invariant filter system to obtain smooth bone motion.
In the above technical solution, the skeletal system matched to the captured object is established by summing the bone lengths estimated in every frame and averaging, giving each bone a single length that is uniform across frames.
In the above technical solution, the virtual marker system matched to the captured object takes, as the initial coordinate of each virtual marker, the coordinate of the corresponding real marker in the first frame of the capture data, expressed in the corresponding bone's local coordinate system.
In the above technical solution, the established skeletal system is divided into five types of bone chains: world coordinate system origin to human root; human root-waist-hip-knee-ankle; human root-waist-chest; human root-waist-chest-neck-head; and human root-waist-chest-clavicle-shoulder-elbow-wrist.
In the above technical solution, optimizing the distance function between the real and virtual markers means performing minimum-distance optimization chain by chain and frame by frame with a nonlinear optimization method, gradually obtaining all the bone motion data.
The nonlinear optimization method applied to the distance function between the real and virtual markers is a quasi-Newton method with BFGS correction.
In the above technical solution, the virtual marker coordinates are updated by an averaging method: the local coordinates of a given real marker in every frame are summed and divided by the total number of frames, giving the updated coordinate of the corresponding virtual marker.
In the above technical solution, the criterion for deciding that the distance between the virtual and real markers is stable is as follows: on each bone chain, minimum-distance optimization of the real-virtual distance function is performed frame by frame, producing one distance-function value per frame; after each optimization pass, the per-frame values are summed into a total for that pass; this total is compared with the total of the previous pass, and if the difference between the two is within a set range, the distance between the virtual and real markers is considered stable.
The method of the invention has the advantages that:
1. The method of the present invention can extract skeletal motion that accurately matches the input optics-based motion capture data.
2. The method reduces the accuracy requirement on marker placement for the input optics-based motion capture data without affecting the accuracy of the capture data.
3. The method is suitable for capturing animal movements as well as human movements.
4. The method is not limited to any particular marker placement scheme and therefore has good universality.
Drawings
FIG. 1 is a schematic view of a human bone and joint;
FIG. 2 is an initial view of a virtual landmark of a human body;
FIG. 3 is a frame view of captured real human-body marker points;
FIG. 4 is a schematic frame view of the matching result between virtual and real marker points;
fig. 5 is a process flow diagram of the method of the present invention.
Description of the drawings:
1. root joint; 2. left hip joint; 3. left knee joint; 4. left ankle joint; 5. left foot; 6. waist joint; 7. chest joint; 8. neck joint; 9. head joint; 10. left clavicle joint; 11. left shoulder joint; 12. left elbow joint; 13. left wrist joint; 14. left palm.
Detailed Description
The method of the present invention will be described in detail with reference to the accompanying drawings.
Referring to fig. 5, there is shown a flow chart of a preferred embodiment of the method of the present invention.
The invention discloses a bone motion extraction method based on optical motion capture data, which comprises the following steps of:
Step 10: obtain optics-based motion capture data using an optical motion capture device as described in the background; this embodiment uses the VICON 4.5 capture device manufactured by VICON. The obtained data are then used to derive the bone motion data, that is, the joint angle values of the skeleton, which can drive the moving object to perform the corresponding actions. FIG. 3 shows a view of human marker points obtained with an optical motion capture device.
Step 20: estimate the position of each joint center between adjacent bones from the marker positions in the known optics-based motion capture data, thereby constructing a skeletal system coarsely matched to the captured object. As shown in FIGS. 1 and 2, for any motion form, one or more markers are attached near each main joint of the human body: the shoulders, elbows, wrists, hips, knees, ankles, neck, and head. The specific placement scheme, i.e. the number of markers and their attached positions, is approximately the same across the various schemes, with only slight differences, as those skilled in the art will readily appreciate. In this embodiment, one marker is attached at each elbow, shoulder, knee, and ankle; two at each wrist; four on the head; four or five at the waist (including two at the neck); three markers or one marker on each foot; and one on each palm. A rough geometric relationship can thus be established between each joint center, its nearby markers, and the neighboring joint centers, and the joint centers can be approximately constructed from this relationship. For example, the wrist joint center is approximately the midpoint of the left and right markers attached to the wrist; the shoulder joint center is approximately the shoulder marker moved downward, perpendicular to the line connecting the two shoulders, by one tenth of the shoulder width; the elbow joint center forms a right triangle with the approximate shoulder joint center, with the elbow center at the right-angle vertex and its distance to the elbow marker equal to the measured elbow width; other joint centers can be constructed similarly.
After all joint centers have been approximately constructed, a skeletal system roughly matching the captured object can be formed by connecting them. Since the marker positions differ in every frame, the length of a given bone in this preliminary skeletal system varies from frame to frame.
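The geometric constructions above can be sketched as follows. This is a minimal illustration; the helper names are ours, and the assumption that the world -Y direction is "downward" is not stated in the patent:

```python
import math
from typing import Tuple

Vec3 = Tuple[float, float, float]

def dist(a: Vec3, b: Vec3) -> float:
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

def wrist_center(left_marker: Vec3, right_marker: Vec3) -> Vec3:
    # Wrist joint center ~ midpoint of the two wrist markers.
    return tuple((u + v) / 2 for u, v in zip(left_marker, right_marker))

def shoulder_center(shoulder_marker: Vec3, l_sh: Vec3, r_sh: Vec3) -> Vec3:
    # Shoulder joint center ~ the shoulder marker moved downward by one
    # tenth of the shoulder width (assumes -y is the downward direction).
    width = dist(l_sh, r_sh)
    x, y, z = shoulder_marker
    return (x, y - width / 10.0, z)

print(wrist_center((0.48, 1.05, 0.20), (0.52, 1.05, 0.20)))
```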
Step 30: establish a local coordinate system on each bone of the constructed skeletal system, and compute initial bone motion data from these local coordinates. Taking the human skeletal system as an example (see FIG. 1), the line from the elbow joint center to the shoulder joint center can be chosen as the Y-axis of the upper arm's local coordinate system; the direction perpendicular to both this line and the line from the elbow joint center to the wrist joint center is chosen as the X-axis; the corresponding Z-axis is then constructed from the X- and Y-axes; and the elbow joint center is taken as the coordinate origin of the upper arm's local coordinate system. Once the bone local coordinates are available, initial bone motion data can be computed. For example, the upper arm and the forearm are two adjacent bones: a local coordinate system P can be constructed on the upper arm and a local coordinate system C on the forearm, with the forearm rotating about the upper arm through the elbow joint. For these two adjacent coordinate systems, if C rotates relative to P, then P is the parent coordinate system, C is the child coordinate system, and $P^{-1}C$ is the transition rotation matrix from P to C. $P^{-1}C$ can be converted directly into Euler angles or quaternions, which are the variables used to represent the joint angle values of the bone; the resulting Euler angles or quaternions are the bone motion data. The initial bone motion data obtained here serve as the initial values for the optimization calculations that follow.
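A sketch of the upper-arm frame construction and the relative rotation $P^{-1}C$ follows. The axis orientation conventions here are one possible choice; the patent fixes only which lines define the Y- and X-axes:

```python
import math
from typing import List, Tuple

Vec3 = Tuple[float, float, float]
Mat3 = List[List[float]]  # row-major 3x3; columns are the frame's axes

def sub(a: Vec3, b: Vec3) -> Vec3:
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a: Vec3, b: Vec3) -> Vec3:
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def norm(a: Vec3) -> Vec3:
    n = math.sqrt(a[0] ** 2 + a[1] ** 2 + a[2] ** 2)
    return (a[0] / n, a[1] / n, a[2] / n)

def upper_arm_frame(elbow: Vec3, shoulder: Vec3, wrist: Vec3) -> Mat3:
    """Y along elbow->shoulder; X perpendicular to the elbow->shoulder and
    elbow->wrist directions; Z completes the right-handed frame. The elbow
    center is the (implicit) origin."""
    y = norm(sub(shoulder, elbow))
    x = norm(cross(sub(wrist, elbow), y))
    z = cross(x, y)
    return [[x[0], y[0], z[0]],
            [x[1], y[1], z[1]],
            [x[2], y[2], z[2]]]

def relative_rotation(P: Mat3, C: Mat3) -> Mat3:
    """The transition rotation P^-1 C from parent frame P to child frame C;
    for rotation matrices the inverse is the transpose."""
    Pt = [[P[j][i] for j in range(3)] for i in range(3)]
    return [[sum(Pt[i][k] * C[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
```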
Step 40: calculate the length of each bone and construct a skeletal system matched to the captured object. The position of the joint center between adjacent joints was estimated from the marker positions in step 20; the length of the bone between two adjacent joints is the average, over all frames, of the distance between the two joint centers. Taking the upper arm of the human skeletal system as an example: the three-dimensional coordinates of the shoulder and elbow joint centers in every frame were estimated in step 20, the upper arm length in a frame is the distance between the shoulder and elbow joint centers of that frame, and the final upper arm length is the sum of the per-frame lengths divided by the total number of frames. Unlike in step 20, the bone lengths obtained in this step are the same in every frame.
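The per-frame averaging of step 40 in miniature (illustrative names; assumes Python 3.8+ for `math.dist`):

```python
import math
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

def bone_length(center_a: List[Vec3], center_b: List[Vec3]) -> float:
    """Single bone length = mean over all frames of the distance between
    the two adjacent joint centers estimated in step 20."""
    frames = len(center_a)
    return sum(math.dist(a, b) for a, b in zip(center_a, center_b)) / frames

shoulders = [(0.0, 1.4, 0.0), (0.0, 1.4, 0.1)]
elbows = [(0.0, 1.1, 0.0), (0.0, 1.08, 0.1)]
print(bone_length(shoulders, elbows))  # mean of the two per-frame lengths
```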
Step 50: from the calculated skeleton, automatically compute the coordinate of each real marker in the first frame of the capture data, expressed in the corresponding bone's local coordinate system, and use it as the initial coordinate of the corresponding virtual marker, thereby establishing a virtual marker system matched to the captured object. FIG. 2 shows the initial virtual marker points of a human body. A real marker is a marker point in the optics-based motion capture data; a virtual marker is a point attached to the corresponding bone whose position in three-dimensional space changes as the bone moves while its coordinate in the bone's local coordinate system remains unchanged. Taking a human body as an example: if the current shoulder coordinate system is P, the coordinate origin (i.e. the shoulder joint center) is O, and the coordinate of a real marker on the shoulder is M, then the coordinate of the corresponding virtual marker is $P^{-1}(M - O)$, a local coordinate value.
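The local-coordinate computation $P^{-1}(M-O)$ can be sketched as follows (rotation given as a row-major 3x3 matrix; for rotations the inverse is the transpose):

```python
from typing import List, Tuple

Vec3 = Tuple[float, float, float]
Mat3 = List[List[float]]  # row-major 3x3 rotation

def virtual_marker_init(P: Mat3, O: Vec3, M: Vec3) -> Vec3:
    """Initial virtual marker coordinate: the first-frame real marker M
    expressed in the bone frame (rotation P, origin O), i.e. P^-1 (M - O)."""
    d = (M[0] - O[0], M[1] - O[1], M[2] - O[2])
    # Multiplying by the transpose of P applies P^-1.
    return tuple(sum(P[k][i] * d[k] for k in range(3)) for i in range(3))
```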
Step 60: divide the tree-shaped skeletal system into several branch chain structures, and establish a distance function between the virtual and real markers of each chain. For example (see FIG. 1), the human skeletal system can be divided into five types of bone chains: world coordinate system origin to human root; human root-waist-hip-knee-ankle; human root-waist-chest; human root-waist-chest-neck-head; and human root-waist-chest-clavicle-shoulder-elbow-wrist. Suppose a chain has N virtual markers and three joints, indexed $i = 1, 2, 3$ in order from the human root, and let $V_j$ be the local coordinate of the j-th virtual marker of the chain, located on the third joint. Each joint has rotation angle parameters $(a_i, b_i, c_i)$; these rotation angle parameters are the bone motion data obtained in step 30. The offset of each joint center in its parent joint's local coordinate system is $P_i$, where the parent joint is the preceding joint on the rigid chain and the parent of the human root is the world coordinate system origin $(0, 0, 0)$. For every joint except the human root, the offset $P_i$ is the vector between that joint center and its parent's joint center as obtained in step 40, and is therefore constant.
From each pair $((a_i, b_i, c_i), P_i)$ a corresponding 4x4 transformation matrix $M_i$ can be computed. The global coordinate of a virtual marker on the third joint is then $M_1 M_2 M_3 V_j$, denoted $P_j$ (to compute the global coordinate of a virtual marker located at a given joint, its local coordinate is multiplied by the chain of corresponding parent transformation matrices). With $T_j$ the coordinate of the corresponding real marker, the distance function over the N virtual and real markers of the chain is
$$f(a_1, b_1, c_1, a_2, b_2, c_2, a_3, b_3, c_3) = \sum_{j=1}^{N} (P_j - T_j)^2.$$
The distance function on the world-origin-to-human-root chain is special, because the human root joint has not only rotation angle parameters $(a, b, c)$ but also displacement parameters $p = (x, y, z)$. The transformation matrix $M$ of the human root can be computed from $(a, b, c, p)$. With N virtual markers on the human root, local coordinates $V_j$, $j = 1, 2, \ldots, N$, global coordinates $M V_j$ denoted $P_j$, and corresponding real markers $T_j$, the distance function on this chain is
$$f(a, b, c, x, y, z) = \sum_{j=1}^{N} (P_j - T_j)^2.$$
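A runnable sketch of the chain distance function follows. The ZYX Euler rotation order below is an assumption for illustration; the patent does not fix a particular order:

```python
import math
from typing import List, Sequence, Tuple

Vec3 = Tuple[float, float, float]
Angles = Tuple[float, float, float]
Mat4 = List[List[float]]

def mat_mul(A: Mat4, B: Mat4) -> Mat4:
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def joint_transform(angles: Angles, offset: Vec3) -> Mat4:
    """4x4 transform of one joint: rotation Rz(c)Ry(b)Rx(a) plus the
    offset P_i of the joint center in its parent's frame."""
    a, b, c = angles
    ca, sa = math.cos(a), math.sin(a)
    cb, sb = math.cos(b), math.sin(b)
    cc, sc = math.cos(c), math.sin(c)
    return [[cc * cb, cc * sb * sa - sc * ca, cc * sb * ca + sc * sa, offset[0]],
            [sc * cb, sc * sb * sa + cc * ca, sc * sb * ca - cc * sa, offset[1]],
            [-sb,     cb * sa,                cb * ca,                offset[2]],
            [0.0, 0.0, 0.0, 1.0]]

def chain_distance(joint_params: Sequence[Angles], offsets: Sequence[Vec3],
                   virtual_pts: Sequence[Vec3], real_pts: Sequence[Vec3]) -> float:
    """f(...) = sum_j (P_j - T_j)^2 with P_j = M1 M2 M3 V_j for markers on
    the last joint of the chain."""
    M: Mat4 = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0],
               [0.0, 0.0, 1.0, 0.0], [0.0, 0.0, 0.0, 1.0]]
    for ang, off in zip(joint_params, offsets):
        M = mat_mul(M, joint_transform(ang, off))
    total = 0.0
    for V, T in zip(virtual_pts, real_pts):
        Vh = (V[0], V[1], V[2], 1.0)
        Pj = [sum(M[i][k] * Vh[k] for k in range(4)) for i in range(3)]
        total += sum((Pj[i] - T[i]) ** 2 for i in range(3))
    return total
```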
Step 70: repeatedly perform minimum-distance optimization chain by chain and frame by frame using a nonlinear optimization method, continually updating the virtual marker coordinates during the optimization, until the distance between the virtual and real markers is stable, gradually obtaining all the bone motion data.
Chain by chain means that the chain from the world coordinate system origin to the human root is optimized first, then the chain human root-waist-hip-knee-ankle, and then in turn the chains human root-waist-chest, human root-waist-chest-neck-head, and human root-waist-chest-clavicle-shoulder-elbow-wrist.
Frame by frame means that when a chain is optimized, the calculation is performed on all motion capture data frames: for example, the $(a_i, b_i, c_i)$ of the first frame are optimized, then those of the second frame, then the third, and so on until the last frame. The $(a_i, b_i, c_i)$ of all frames together constitute the bone motion data.
Repeatedly refers to iterating the following process when optimizing a given chain: after all frames of the chain have been optimized once, yielding the chain's bone motion data, the virtual marker coordinates are recomputed by an averaging method, the new coordinates are substituted into the chain's distance function, and the minimum-distance optimization of the chain is run again. The averaging method means that, with the skeleton driven by the new bone motion data, the three-dimensional local coordinates of a given real marker relative to the new corresponding bone coordinate system are computed in each frame by the method of step 50; summing these per-frame local coordinates and dividing by the total number of frames gives the updated coordinate of the corresponding virtual marker.
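The averaging update just described, in miniature (an illustrative helper, not the patent's code):

```python
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

def updated_virtual_marker(local_coords_per_frame: List[Vec3]) -> Vec3:
    """Averaging method: a real marker's local coordinates (relative to the
    newly driven bone frame) are summed over all frames and divided by the
    frame count, giving the updated virtual marker coordinate."""
    n = len(local_coords_per_frame)
    return (sum(p[0] for p in local_coords_per_frame) / n,
            sum(p[1] for p in local_coords_per_frame) / n,
            sum(p[2] for p in local_coords_per_frame) / n)
```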
Until the distance between the virtual marker and the real marker is stable means that each chain is optimized by an iterative process, a cycle of optimize, update, and re-optimize. Each optimization pass runs over all frames frame by frame, yielding the $(a_i, b_i, c_i)$ parameter values and one distance-function value per frame. After a pass, the per-frame distance-function values are summed into a total for that pass; this total is compared with the previous pass's total, and if the difference between the two is within ±0.5, the distance between the virtual and real markers is considered stable and the repeated optimization of the chain terminates. The $(a_i, b_i, c_i)$ of every frame from the last optimization pass of the chain together form the chain's bone motion data. The bone motion data of every chain are obtained in the same way, giving the final result.
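The optimize-update-reoptimize cycle with the ±0.5 stability test can be sketched as follows; `optimize_pass` and `update_markers` are caller-supplied stand-ins for the patent's steps:

```python
from typing import Callable

def iterate_chain(optimize_pass: Callable[[], float],
                  update_markers: Callable[[], None],
                  tol: float = 0.5, max_iters: int = 100) -> float:
    """Run the optimize-update-reoptimize loop for one bone chain.
    `optimize_pass` returns the summed distance-function value over all
    frames of the pass; iteration stops when consecutive totals differ
    by no more than +/- tol (0.5 in the patent)."""
    prev = optimize_pass()
    for _ in range(max_iters):
        update_markers()
        total = optimize_pass()
        if abs(total - prev) <= tol:  # stable: distances no longer changing
            return total
        prev = total
    return prev
```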
The recommended optimization method here is a quasi-Newton method with BFGS correction, since the BFGS correction is among the most effective quasi-Newton formulas, has global convergence, and tolerates low-precision line searches. The specific implementation details are as follows:
if the distance function between the virtual mark point and the real mark point is as follows:
$$f(a_1, b_1, c_1, a_2, b_2, c_2, a_3, b_3, c_3) = \sum_{j=1}^{N} (P_j - T_j)^2$$
in this embodiment, an unconstrained nonlinear optimization is performed to minimize the distance function:
Minimize $F(X)$
where $X = (a_1, b_1, c_1, a_2, b_2, c_2, a_3, b_3, c_3)$.
This is equivalent to solving the system of nonlinear equations
$\nabla F(X) = 0$, where $f_i = \partial F / \partial X_i$, $i = 1, 2, \ldots, 9$.
The method comprises the following specific implementation steps:
(1) Select an initial value: when optimizing the first frame, $x_0$ takes the corresponding value computed in step 30; otherwise $x_0$ takes the result of the previous frame. Set $G_0 = I$ (the identity matrix) and choose an allowable error $\varepsilon > 0$; here $\varepsilon = 0.05$.
(2) Check the termination condition: compute $\nabla f(x_0)$; if $\|\nabla f(x_0)\| < \varepsilon$, terminate the iteration with $x_0$ as the approximate optimal solution; otherwise go to step (3).
(3) Construct the initial BFGS direction: $d_0 = -G_0 \nabla f(x_0)$; set $k = 0$.
(4) Perform a one-dimensional search: find $\lambda_k$ and set $x_{k+1} = x_k + \lambda_k d_k$ such that $f(x_k + \lambda_k d_k) = \min_{\lambda} f(x_k + \lambda d_k)$.
(5) Check the termination condition: compute $\nabla f(x_{k+1})$; if $\|\nabla f(x_{k+1})\| < \varepsilon$, terminate the iteration with $x_{k+1}$ as the approximate optimal solution; otherwise go to step (6).
(6) Construct the BFGS direction using the BFGS iterative formula:
$$G_{k+1} = G_k + \frac{\Delta x_k \Delta x_k^T}{\Delta x_k^T \Delta g_k}\left(1 + \frac{\Delta g_k^T G_k \Delta g_k}{\Delta x_k^T \Delta g_k}\right) - \frac{1}{\Delta x_k^T \Delta g_k}\left(\Delta x_k \Delta g_k^T G_k + G_k \Delta g_k \Delta x_k^T\right)$$
where
$\Delta x_k = x_{k+1} - x_k$,
$\Delta g_k = \nabla f(x_{k+1}) - \nabla f(x_k)$,
$\Delta G_k = G_{k+1} - G_k$.
Then set $d_{k+1} = -G_{k+1} \nabla f(x_{k+1})$, increment $k$, and return to step (4).
the above implements the distance function <math> <mrow> <mi>f</mi> <mrow> <mo>(</mo> <msub> <mi>a</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>b</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>c</mi> <mn>1</mn> </msub> <mo>,</mo> <msub> <mi>a</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi>b</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi>c</mi> <mn>2</mn> </msub> <mo>,</mo> <msub> <mi>a</mi> <mn>3</mn> </msub> <mo>,</mo> <msub> <mi>b</mi> <mn>3</mn> </msub> <mo>,</mo> <msub> <mi>c</mi> <mn>3</mn> </msub> <mo>)</mo> </mrow> <mo>=</mo> <munderover> <mi>&Sigma;</mi> <mrow> <mi>j</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>N</mi> </munderover> <msup> <mrow> <mo>(</mo> <msub> <mi>P</mi> <mi>j</mi> </msub> <mo>-</mo> <msub> <mi>T</mi> <mi>j</mi> </msub> <mo>)</mo> </mrow> <mn>2</mn> </msup> </mrow> </math> For a particular distance function, i.e. the distance function on the world coordinate system origin-human root chain
<math> <mrow> <mi>f</mi> <mrow> <mo>(</mo> <mi>a</mi> <mo>,</mo> <mi>b</mi> <mo>,</mo> <mi>c</mi> <mo>,</mo> <mi>x</mi> <mo>,</mo> <mi>y</mi> <mo>,</mo> <mi>z</mi> <mo>)</mo> </mrow> <mo>=</mo> <munderover> <mi>&Sigma;</mi> <mrow> <mi>j</mi> <mo>=</mo> <mn>1</mn> </mrow> <mi>N</mi> </munderover> <msup> <mrow> <mo>(</mo> <msub> <mi>P</mi> <mi>j</mi> </msub> <mo>-</mo> <msub> <mi>T</mi> <mi>j</mi> </msub> <mo>)</mo> </mrow> <mn>2</mn> </msup> <mo>,</mo> </mrow> </math> the quasi-Newton method with BFGS correction described above is equally applicable.
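As an illustrative sketch (not part of the patent text), the inverse-Hessian update above can be written in Python with NumPy; the function names `bfgs_inverse_update` and `bfgs_minimize` and the simple backtracking line search are assumptions, not the patent's implementation:

```python
import numpy as np

def bfgs_inverse_update(G, dx, dg):
    """BFGS correction of the inverse-Hessian estimate G, following the
    formula above, with dx = x_{k+1} - x_k and dg = grad f(x_{k+1}) - grad f(x_k)."""
    dxdg = float(dx @ dg)
    if abs(dxdg) < 1e-12:          # skip degenerate steps to keep G well-defined
        return G
    term1 = np.outer(dx, dx) / dxdg * (1.0 + (dg @ G @ dg) / dxdg)
    term2 = (np.outer(dx, dg @ G) + np.outer(G @ dg, dx)) / dxdg
    return G + term1 - term2

def bfgs_minimize(f, grad, x0, iters=50, step=1.0):
    """Minimize f by quasi-Newton iteration with the BFGS inverse update."""
    x = np.asarray(x0, dtype=float)
    G = np.eye(len(x))              # initial inverse-Hessian estimate
    g = grad(x)
    for _ in range(iters):
        d = -G @ g                  # quasi-Newton search direction
        t = step                    # simple backtracking line search
        while f(x + t * d) > f(x) and t > 1e-10:
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        G = bfgs_inverse_update(G, x_new - x, g_new - g)
        x, g = x_new, g_new
    return x
```

Applied to a sum-of-squared-distances function of the form used in the text, the iteration converges to the marker configuration with minimum total distance.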
Taking the human skeleton system and the marker system as examples, fig. 4 shows the final calculation result for a corresponding frame. By repeatedly optimizing the distance between the virtual marker points and the real marker points, precisely matched bone motion can be extracted from the input optical motion capture data, and the accuracy required of the marker placement on the captured subject is correspondingly relaxed.
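The repeated optimization passes and the stability criterion they use can be sketched as follows; `solve_frame` (a per-frame minimizer) and `update_virtual` (re-averaging of the virtual marker coordinates) are assumed callbacks standing in for the per-chain machinery described above:

```python
import numpy as np

def total_distance(real_markers, virtual_markers):
    """Sum of squared distances f = sum_j (P_j - T_j)^2 over all markers."""
    return float(((real_markers - virtual_markers) ** 2).sum())

def optimize_until_stable(frames, solve_frame, update_virtual,
                          tol=1e-6, max_passes=50):
    """Repeat per-frame minimization passes until the summed distance between
    successive passes changes by less than tol (the stability criterion)."""
    prev_total = np.inf
    for _ in range(max_passes):
        total = sum(solve_frame(f) for f in frames)   # one optimization pass
        update_virtual()                              # refresh virtual markers
        if abs(prev_total - total) < tol:             # distance has stabilized
            return total
        prev_total = total
    return prev_total
```

In this sketch each pass minimizes every frame of a chain, the virtual markers are re-averaged, and the per-frame residuals are summed; iteration stops once two consecutive totals agree to within the set range.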
Step 80, filtering the bone motion data with a quaternion linear time-invariant filtering system to obtain smooth bone motion. The filtering proceeds as follows: the Euler angle data are converted into the corresponding quaternions; the quaternions are mapped into the tangent space by the logarithm operation; a linear time-invariant filter is applied in the tangent space to obtain a smooth curve; the filtered result is mapped back into quaternion space by the exponential operation and finally converted back into Euler angles. This filtering method is prior art and is not described in detail here.
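The log/exp mapping used in this prior-art step can be sketched as follows; a simple moving-average kernel stands in for the generic linear time-invariant filter, and all function names are illustrative assumptions:

```python
import numpy as np

def quat_log(q):
    """Map a unit quaternion (w, x, y, z) to the tangent space R^3."""
    w, v = q[0], q[1:]
    n = np.linalg.norm(v)
    if n < 1e-12:
        return np.zeros(3)
    return np.arccos(np.clip(w, -1.0, 1.0)) * v / n

def quat_exp(v):
    """Map a tangent-space vector back to a unit quaternion."""
    theta = np.linalg.norm(v)
    if theta < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    return np.concatenate(([np.cos(theta)], np.sin(theta) * v / theta))

def filter_rotations(quats, kernel):
    """Filter a quaternion curve: log map -> per-axis LTI filter -> exp map."""
    logs = np.array([quat_log(q) for q in quats])
    smoothed = np.column_stack(
        [np.convolve(logs[:, i], kernel, mode="same") for i in range(3)]
    )
    return np.array([quat_exp(v) for v in smoothed])
```

Because filtering happens in the tangent space, the result always maps back to valid unit quaternions, which can then be converted to Euler angles for the bone motion data.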
Through the above steps, the method described in this embodiment can extract skeletal motion from optical motion capture data that accurately reproduces the motion of the captured subject.

Claims (9)

1. A method of bone motion extraction from optically-based motion capture data, comprising the steps of:
inputting the optical motion capture data captured by the optical capture device into a computer; estimating the positions of the joint centers between adjacent bones from the positions of the marker points, and thereby constructing a bone system roughly matched to the captured object; establishing a bone local coordinate system on the constructed bone system; calculating initial bone motion data from the bone local coordinates, and establishing a bone system and a virtual marker point system matched to the motion capture object; dividing the established bone system into bone chains, establishing a distance function between the real marker points and the virtual marker points on each bone chain, performing minimum-distance optimization of the distance function frame by frame and chain by chain while continuously updating the coordinates of the virtual marker points; the bone motion data obtained once the distance between the virtual marker points and the real marker points has stabilized are the bone motion data extracted from the optical motion capture data.
2. The method of claim 1, further comprising filtering the resulting bone motion data using a quaternion linear time invariant filter system to obtain a smooth bone motion.
3. A method as claimed in claim 1 or 2, wherein the bone system is established by averaging the bone lengths estimated in each frame, so that every frame uses a uniform bone length.
4. A method as claimed in claim 1 or 2, wherein the virtual marker point system matched to the motion capture object consists of the initial coordinates of each real marker point, taken in the first frame of the motion capture data, expressed in the corresponding bone local coordinate system.
5. A method as claimed in claim 1 or 2, wherein the established bone system is divided into five types of bone chains: world coordinate system origin-human root, human root-waist-hip-knee-ankle, and human root-waist-chest-shoulder-elbow-wrist.
6. The method as claimed in claim 1 or 2, wherein the optimization of the distance function between the real marker points and the virtual marker points is a chain-by-chain, frame-by-frame distance-minimization optimization performed by a nonlinear optimization method, so that all bone motion data are obtained step by step.
7. The method of claim 6, wherein the non-linear optimization of the distance function between the real marker points and the virtual marker points is a quasi-Newton method using BFGS correction.
8. The method as claimed in claim 1 or 2, wherein the coordinates of the virtual marker points are updated by averaging: the local coordinate values of a given real marker point in each frame are summed and divided by the total number of frames, yielding the updated coordinates of the virtual marker point corresponding to that real marker point.
9. A method as claimed in claim 1 or 2, wherein the criterion for judging that the distance between the virtual marker points and the real marker points is stable is: on each bone chain, the distance function between the real marker points and the virtual marker points is minimized frame by frame, yielding a distance function value for each frame; after each optimization pass, the per-frame distance function values are summed to give the total function value of that pass; the total function value of the current pass is compared with that of the previous pass, and if the difference between the two function values is within a set range, the distance between the virtual marker points and the real marker points is considered stable.
CNB2005100535958A 2004-10-29 2005-03-10 skeleton motion extraction method by means of optical-based motion capture data Expired - Fee Related CN100361070C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2005100535958A CN100361070C (en) 2004-10-29 2005-03-10 skeleton motion extraction method by means of optical-based motion capture data

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN200410086742 2004-10-29
CN200410086742.7 2004-10-29
CNB2005100535958A CN100361070C (en) 2004-10-29 2005-03-10 skeleton motion extraction method by means of optical-based motion capture data

Publications (2)

Publication Number Publication Date
CN1766831A true CN1766831A (en) 2006-05-03
CN100361070C CN100361070C (en) 2008-01-09

Family

ID=36742734

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2005100535958A Expired - Fee Related CN100361070C (en) 2004-10-29 2005-03-10 skeleton motion extraction method by means of optical-based motion capture data

Country Status (1)

Country Link
CN (1) CN100361070C (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101241601B (en) * 2008-02-19 2010-06-02 深圳先进技术研究院 Graphic processing joint center parameter estimation method
CN101241600B (en) * 2008-02-19 2010-09-29 深圳先进技术研究院 Chain-shaped bone matching method in movement capturing technology
CN101930628A (en) * 2010-09-21 2010-12-29 北京大学 Monocular-camera and multiplane mirror catadioptric device-based motion capturing method
CN101782968B (en) * 2010-02-03 2012-10-03 北京航空航天大学 Human skeleton extracting and orientation judging method based on geodetic survey model
CN102819863A (en) * 2012-07-31 2012-12-12 中国科学院计算技术研究所 Method and system for acquiring three-dimensional human body motion in real time on line
CN103268158A (en) * 2013-05-21 2013-08-28 上海速盟信息技术有限公司 Method and device for acquiring gravity sensing data and electronic device
CN103440037A (en) * 2013-08-21 2013-12-11 中国人民解放军第二炮兵工程大学 Real-time interaction virtual human body motion control method based on limited input information
CN106227368A (en) * 2016-08-03 2016-12-14 北京工业大学 A kind of human synovial angle calculation method and device
CN108663026A (en) * 2018-05-21 2018-10-16 湖南科技大学 A kind of vibration measurement method
CN109767482A (en) * 2019-01-09 2019-05-17 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium
CN113393563A (en) * 2021-05-26 2021-09-14 杭州易现先进科技有限公司 Method, system, electronic device and storage medium for automatically labeling key points

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6088042A (en) * 1997-03-31 2000-07-11 Katrix, Inc. Interactive motion data animation system
US6115052A (en) * 1998-02-12 2000-09-05 Mitsubishi Electric Information Technology Center America, Inc. (Ita) System for reconstructing the 3-dimensional motions of a human figure from a monocularly-viewed image sequence
KR100361462B1 (en) * 1999-11-11 2002-11-21 황병익 Method for Acquisition of Motion Capture Data
JP3960536B2 (en) * 2002-08-12 2007-08-15 株式会社国際電気通信基礎技術研究所 Computer-implemented method and computer-executable program for automatically adapting a parametric dynamic model to human actor size for motion capture
KR100507780B1 (en) * 2002-12-20 2005-08-17 한국전자통신연구원 Apparatus and method for high-speed marker-free motion capture

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101241600B (en) * 2008-02-19 2010-09-29 深圳先进技术研究院 Chain-shaped bone matching method in movement capturing technology
CN101241601B (en) * 2008-02-19 2010-06-02 深圳先进技术研究院 Graphic processing joint center parameter estimation method
CN101782968B (en) * 2010-02-03 2012-10-03 北京航空航天大学 Human skeleton extracting and orientation judging method based on geodetic survey model
CN101930628A (en) * 2010-09-21 2010-12-29 北京大学 Monocular-camera and multiplane mirror catadioptric device-based motion capturing method
CN102819863A (en) * 2012-07-31 2012-12-12 中国科学院计算技术研究所 Method and system for acquiring three-dimensional human body motion in real time on line
CN102819863B (en) * 2012-07-31 2015-01-21 中国科学院计算技术研究所 Method and system for acquiring three-dimensional human body motion in real time on line
CN103268158B (en) * 2013-05-21 2017-09-08 上海速盟信息技术有限公司 A kind of method, device and a kind of electronic equipment of simulated gravity sensing data
CN103268158A (en) * 2013-05-21 2013-08-28 上海速盟信息技术有限公司 Method and device for acquiring gravity sensing data and electronic device
CN103440037A (en) * 2013-08-21 2013-12-11 中国人民解放军第二炮兵工程大学 Real-time interaction virtual human body motion control method based on limited input information
CN103440037B (en) * 2013-08-21 2017-02-08 中国人民解放军第二炮兵工程大学 Real-time interaction virtual human body motion control method based on limited input information
CN106227368A (en) * 2016-08-03 2016-12-14 北京工业大学 A kind of human synovial angle calculation method and device
CN106227368B (en) * 2016-08-03 2019-04-30 北京工业大学 A kind of human synovial angle calculation method and device
CN108663026A (en) * 2018-05-21 2018-10-16 湖南科技大学 A kind of vibration measurement method
CN108663026B (en) * 2018-05-21 2020-08-07 湖南科技大学 Vibration measuring method
CN109767482A (en) * 2019-01-09 2019-05-17 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium
CN109767482B (en) * 2019-01-09 2024-01-09 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium
CN113393563A (en) * 2021-05-26 2021-09-14 杭州易现先进科技有限公司 Method, system, electronic device and storage medium for automatically labeling key points

Also Published As

Publication number Publication date
CN100361070C (en) 2008-01-09

Similar Documents

Publication Publication Date Title
CN1766831A (en) A kind of skeleton motion extraction method of the motion capture data based on optics
CN107833271B (en) Skeleton redirection method and device based on Kinect
CN108876815B (en) Skeleton posture calculation method, character virtual model driving method and storage medium
CN114417616A (en) Digital twin modeling method and system for assembly robot teleoperation environment
CN106327571A (en) Three-dimensional face modeling method and three-dimensional face modeling device
CN113362452B (en) Hand posture three-dimensional reconstruction method and device and storage medium
CN110633005A (en) Optical unmarked three-dimensional human body motion capture method
WO2024094227A1 (en) Gesture pose estimation method based on kalman filtering and deep learning
CN117671738B (en) Human body posture recognition system based on artificial intelligence
CN112183316B (en) Athlete human body posture measuring method
CN110135277A (en) A kind of Human bodys&#39; response method based on convolutional neural networks
CN114792326A (en) Surgical navigation point cloud segmentation and registration method based on structured light
CN114550292A (en) High-physical-reality human body motion capture method based on neural motion control
KR20100062320A (en) Generating method of robot motion data using image data and generating apparatus using the same
Zhao et al. An adaptive stair-ascending gait generation approach based on depth camera for lower limb exoskeleton
CN116664622A (en) Visual movement control method and device
Hao et al. Cromosim: A deep learning-based cross-modality inertial measurement simulator
CN111429499A (en) High-precision three-dimensional reconstruction method for hand skeleton based on single depth camera
CN115240272A (en) Video-based attitude data capturing method
CN114469079A (en) Body joint measuring method using LightHouse
CN112435321A (en) Leap Motion hand skeleton Motion data optimization method
CN110837751B (en) Human motion capturing and gait analysis method based on RGBD depth camera
CN113345010A (en) Multi-Kinect system coordinate calibration and conversion method based on improved ICP
KR20150061549A (en) Motion tracking apparatus with hybrid cameras and method there
Zeng et al. An evaluation approach of multi-person movement synchronization level using OpenPose

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
ASS Succession or assignment of patent right

Owner name: YANTAI HUITONG NETWORK TECHNOLOGY CO., LTD.

Free format text: FORMER OWNER: INSTITUTE OF COMPUTING TECHNOLOGY, CHINESE ACADEMY OF SCIENCES

Effective date: 20121225

C41 Transfer of patent application or patent right or utility model
COR Change of bibliographic data

Free format text: CORRECT: ADDRESS; FROM: 100080 HAIDIAN, BEIJING TO: 264003 YANTAI, SHANDONG PROVINCE

TR01 Transfer of patent right

Effective date of registration: 20121225

Address after: 264003 Shandong Province, Yantai city Laishan District Yingchun Street No. 133

Patentee after: YANTAI HUITONG NETWORK TECHNOLOGY CO., LTD.

Address before: 100080 Haidian District, Zhongguancun Academy of Sciences, South Road, No. 6, No.

Patentee before: Institute of Computing Technology, Chinese Academy of Sciences

CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080109

Termination date: 20170310

CF01 Termination of patent right due to non-payment of annual fee