CN102800103A - Markerless motion capture method and device based on multi-view depth cameras - Google Patents

Markerless motion capture method and device based on multi-view depth cameras

Info

Publication number
CN102800103A
CN102800103A
Authority
CN
China
Prior art keywords
point
depth
depth camera
multi-view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012102078092A
Other languages
Chinese (zh)
Other versions
CN102800103B (en)
Inventor
刘烨斌
叶亘之
戴琼海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN201210207809.2A
Publication of CN102800103A
Application granted
Publication of CN102800103B
Active legal status
Anticipated expiration

Landscapes

  • Image Analysis (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides a markerless motion capture method and device based on multi-view depth cameras. The method comprises the following steps: calibrating the multi-view depth cameras to obtain calibration parameters; acquiring depth maps and color maps through the multi-view depth cameras; performing a three-dimensional space transformation according to the calibration parameters and the depth maps to obtain a point cloud set; matching each 3D point cloud point P in the point cloud set with each surface mesh vertex V on a human body model according to the information in the depth maps and color maps, to obtain a matching result; and, according to the matching result, performing motion capture based on a skeleton-driven surface model to obtain a tracking result. With this method, motion capture can be carried out flexibly and conveniently without cumbersome markers. The method and device yield accurate and robust motion parameters, recover the surface mesh model with high fidelity, run fast, and keep system cost low.

Description

Markerless motion capture method and device based on multi-view depth cameras
Technical field
The present invention relates to the field of computer vision, and in particular to a markerless motion capture method and device based on multi-view depth cameras.
Background art
Human motion capture is a key and difficult problem in computer vision. Accurate and robust motion capture algorithms have wide applications in film production, television broadcasting, robot control and other fields. Motion capture methods can generally be divided into two classes: marker-based and markerless. In marker-based motion capture, the performer wears clothing covered with distinctive markers, and by recognizing these markers the algorithm obtains the performer's motion parameters in real time. This approach is highly accurate and computationally cheap, and has been widely used in the production of feature films. However, the performer must wear heavy, tight-fitting prop clothing, and the approach is difficult to apply to shooting in real scenes outside the film studio. To overcome these shortcomings, markerless motion capture has gradually become a research focus in this field in recent years.
In existing markerless motion capture, the performer, dressed in everyday clothes, typically acts in a green-screen studio; a multi-camera system installed there records the performance, and a dedicated algorithm performs motion tracking on the captured multi-view video sequences. This removes the marker-based requirement of wearing marked clothing, but it still depends on a green screen and cannot be applied to shooting in general scenes.
Depth cameras are a way of perceiving the three-dimensional world that has gradually become popular in recent years. For each point in the scene, a depth camera returns not only its color but also the perpendicular distance from the point to the plane through the camera's optical center. This pioneering technology makes markerless motion capture in general scenes possible.
Summary of the invention
The present invention aims to solve at least one of the technical problems described above. To this end, one object of the invention is to propose a markerless motion capture method based on multi-view depth cameras that requires no marker equipment and produces robust tracking results. Another object of the invention is to propose a markerless motion capture device based on multi-view depth cameras with the same properties.
To achieve these objects, the markerless motion capture method based on multi-view depth cameras according to the present invention comprises the following steps: A. calibrating the multi-view depth cameras to obtain calibration parameters; B. acquiring depth maps and color maps through the multi-view depth cameras; C. performing a three-dimensional space transformation according to the calibration parameters and the depth maps to obtain a point cloud set; D. matching each 3D point cloud point P in the point cloud set with each surface mesh vertex V on a human body model according to the information in the depth maps and color maps, to obtain a matching result; E. performing motion capture by optimizing an energy function according to the matching result, to obtain a tracking result.
In one embodiment of the invention, the calibration parameters comprise an intrinsic matrix Kc, a rotation matrix Rc and a translation vector Tc.
In one embodiment of the invention, the three-dimensional space transformation that yields the point cloud set is computed as:
P = Rc^(-1) · (d(i, j) · Kc^(-1) · (i, j, 1)^T − Tc),
where P is a 3D point in the point cloud set, (i, j) is a pixel in the depth map with coordinates i and j, and d(i, j) is the depth value at pixel (i, j).
In one embodiment of the invention, matching each 3D point cloud point P in the point cloud set with each surface mesh vertex V on the human body model further comprises:
exhaustively computing the matching metric function S(V, P) between the 3D point cloud point P and each surface mesh vertex V, and choosing the vertex V that maximizes the metric function as the point successfully matched to P, wherein the metric function is computed as:
S(V, P) = max(N(V)·N(P), σ_N) · exp(−(C(V) − C(P))²/σ_C²) · exp(−(X(V) − X(P))²/σ_X²),
where N(V) and N(P) denote the normal vectors of the surface mesh vertex V and the 3D point cloud point P respectively, σ_N is a threshold on the inner product of the two normals, C(V) and C(P) denote the color values of V and P, σ_C is a color-distribution normalization threshold, X(V) and X(P) denote the positions of V and P, and σ_X is a distance-distribution normalization threshold.
In one embodiment of the invention, the human body model obeys a skeleton-driven surface rule, i.e. it satisfies the defining formula of the skeleton-driven surface rule:
T_X V = ∏_{j=0}^{n} exp(θ_j ξ_j) V,
where V is a surface mesh vertex, T_X V is the target position obtained by deforming the surface mesh vertex under the joint rotation angles X, j indexes the joints, and θ_j ξ_j are the joint rotation (twist) parameters.
In one embodiment of the invention, performing motion capture by optimizing an energy function according to the matching result comprises: according to the matching result, combined with the defining formula of the skeleton-driven surface rule, solving the energy function
arg min_X Σ_i w_i ||T_X V_i − P_i||
to obtain the optimal joint rotation angles X, which constitute the tracking result.
In one embodiment of the invention, the method further comprises the step of applying a Laplacian surface deformation to the human body model so that the tracking result is closer to the real situation, wherein the Laplacian surface deformation is computed as:
arg min_V ( ||LV − δ||² + λ ||CV − q||² ),
where ||LV − δ||² is the Laplacian-coordinate surface geometry constraint, ||CV − q||² is the motion constraint, and λ is the surface mesh deformation weight.
According to the markerless motion capture method based on multi-view depth cameras of the embodiments of the invention, motion capture can be carried out flexibly and conveniently without cumbersome markers, with accurate and robust motion parameters, high-fidelity surface mesh recovery, fast algorithm execution and low system cost.
To achieve these objects, the markerless motion capture device based on multi-view depth cameras according to the present invention comprises the following parts: multi-view depth cameras for acquiring depth maps and color maps; a calibration module for calibrating the multi-view depth cameras to obtain calibration parameters; a point cloud conversion module for performing the three-dimensional space transformation according to the calibration parameters and the depth maps to obtain the point cloud set; a matching module for matching each 3D point cloud point P in the point cloud set with each surface mesh vertex V on the human body model according to the information in the depth maps and color maps, to obtain the matching result; and a motion capture module for performing motion capture by optimizing the energy function according to the matching result, to obtain the tracking result.
In one embodiment of the invention, the calibration parameters comprise an intrinsic matrix Kc, a rotation matrix Rc and a translation vector Tc.
In one embodiment of the invention, the three-dimensional space transformation that yields the point cloud set is computed as:
P = Rc^(-1) · (d(i, j) · Kc^(-1) · (i, j, 1)^T − Tc),
where P is a 3D point in the point cloud set, (i, j) is a pixel in the depth map with coordinates i and j, and d(i, j) is the depth value at pixel (i, j).
In one embodiment of the invention, in the matching module: the matching metric function S(V, P) between the 3D point cloud point P and each surface mesh vertex V is computed exhaustively, and the vertex V that maximizes the metric function is chosen as the point successfully matched to P, wherein the metric function is computed as:
S(V, P) = max(N(V)·N(P), σ_N) · exp(−(C(V) − C(P))²/σ_C²) · exp(−(X(V) − X(P))²/σ_X²),
where N(V) and N(P) denote the normal vectors of V and P respectively, σ_N is a threshold on the inner product of the two normals, C(V) and C(P) denote the color values of V and P, σ_C is a color-distribution normalization threshold, X(V) and X(P) denote the positions of V and P, and σ_X is a distance-distribution normalization threshold.
In one embodiment of the invention, the human body model obeys a skeleton-driven surface rule, i.e. it satisfies the defining formula of the skeleton-driven surface rule:
T_X V = ∏_{j=0}^{n} exp(θ_j ξ_j) V,
where V is a surface mesh vertex, T_X V is the target position obtained by deforming the surface mesh vertex under the joint rotation angles X, j indexes the joints, and θ_j ξ_j are the joint rotation (twist) parameters.
In one embodiment of the invention, in the motion capture module, according to the matching result and combined with the defining formula of the skeleton-driven surface rule, the energy function
arg min_X Σ_i w_i ||T_X V_i − P_i||
is solved to obtain the optimal joint rotation angles X, which constitute the tracking result.
In one embodiment of the invention, the device further comprises an optimization module for applying a Laplacian surface deformation to the human body model so that the tracking result is closer to the real situation, wherein the Laplacian surface deformation is computed as:
arg min_V ( ||LV − δ||² + λ ||CV − q||² ),
where ||LV − δ||² is the Laplacian-coordinate surface geometry constraint, ||CV − q||² is the motion constraint, and λ is the surface mesh deformation weight.
According to the markerless motion capture device based on multi-view depth cameras of the embodiments of the invention, motion capture can be carried out flexibly and conveniently without cumbersome markers, with accurate and robust motion parameters, high-fidelity surface mesh recovery, fast algorithm execution and low system cost.
Additional aspects and advantages of the invention will be given in part in the following description, will become apparent in part from the description, or may be learned through practice of the invention.
Brief description of the drawings
The above and/or additional aspects and advantages of the invention will become apparent and readily understood from the following description of the embodiments taken in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart of a markerless motion capture method based on multi-view depth cameras according to an embodiment of the invention;
Fig. 2 is a flowchart of a markerless motion capture method based on multi-view depth cameras according to another embodiment of the invention;
Fig. 3 is a structural block diagram of a markerless motion capture device based on multi-view depth cameras according to an embodiment of the invention; and
Fig. 4 is a structural block diagram of a markerless motion capture device based on multi-view depth cameras according to another embodiment of the invention.
Detailed description of the embodiments
Embodiments of the invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements, or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary, intended only to explain the invention, and are not to be construed as limiting it. On the contrary, the embodiments of the invention cover all changes, modifications and equivalents falling within the spirit and scope of the appended claims.
The markerless motion capture method and device based on multi-view depth cameras according to embodiments of the invention are described below with reference to the drawings.
Fig. 1 is a flowchart of a markerless motion capture method based on multi-view depth cameras according to an embodiment of the invention. As shown in Fig. 1, the method comprises the following steps:
Step S101: calibrate the multi-view depth cameras to obtain calibration parameters. Specifically, the invention employs multiple depth cameras with different viewpoints, whose parameters must first be calibrated. In one embodiment of the invention, for a camera C the calibration parameters comprise an intrinsic matrix Kc, a rotation matrix Rc and a translation vector Tc. The intrinsic matrix reflects basic properties of the depth camera (focal length, principal point position, etc.); intrinsic calibration is usually performed with the checkerboard calibration method. The extrinsic parameters reflect the position and attitude of the camera in the world coordinate system.
Step S102: acquire depth maps and color maps through the multi-view depth cameras. Specifically, the multiple depth cameras with different viewpoints capture depth video and color video; in practice, the continuous video is decomposed along the time axis into multi-frame depth maps and color maps, and motion capture is performed frame by frame.
Step S103: perform the three-dimensional space transformation according to the calibration parameters and the depth maps to obtain the point cloud set. Specifically, for each pixel of a depth map, its coordinates, its depth value and the calibration parameters of the camera can be combined and transformed into a point in three-dimensional space. The multiple depth maps are thus transformed and fused, finally yielding a single overall point cloud set.
In one embodiment of the invention, the three-dimensional space transformation that yields the point cloud set is computed as:
P = Rc^(-1) · (d(i, j) · Kc^(-1) · (i, j, 1)^T − Tc),
where P is a 3D point in the point cloud set, (i, j) is a pixel in the depth map with coordinates i and j, and d(i, j) is the depth value at pixel (i, j).
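As an illustration of this transformation, the following is a minimal NumPy sketch of back-projecting one depth map into a world-space point cloud, assuming the pinhole convention x_cam = Rc·X_world + Tc; the function name and the commented loop over views are illustrative, not from the patent.

```python
import numpy as np

def backproject_depth(depth, K, R, T):
    """Back-project a depth map into a world-space point cloud.

    Assumes the pinhole model x_cam = R @ X_world + T, so that
    X_world = R^{-1} (d(i, j) * K^{-1} [i, j, 1]^T - T).
    """
    h, w = depth.shape
    # Pixel grid in homogeneous coordinates, one column per pixel
    # (row index i, column index j, as in the patent's notation).
    jj, ii = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([ii.ravel(), jj.ravel(), np.ones(h * w)])  # 3 x N
    rays = np.linalg.inv(K) @ pix                   # camera-space rays
    cam_pts = depth.ravel() * rays                  # scale each ray by its depth
    world_pts = np.linalg.inv(R) @ (cam_pts - T.reshape(3, 1))
    valid = depth.ravel() > 0                       # drop pixels with no depth
    return world_pts[:, valid].T                    # N_valid x 3

# Fusing the clouds from several calibrated views is then a concatenation:
# cloud = np.vstack([backproject_depth(d, K[c], R[c], T[c])
#                    for c, d in enumerate(depth_maps)])
```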
Step S104: according to the information in the depth maps and color maps, match each 3D point cloud point P in the point cloud set with each surface mesh vertex V on the human body model, to obtain the matching result.
To help those skilled in the art better understand the invention, the human body model mentioned in the invention is introduced here. The human body model consists of a human skeleton model and a surface mesh model. The human skeleton model is a human kinematic chain composed of 31 joints; similar to human physiology, the rotation axis of each joint is fixed, and the essence of the motion tracking problem is to solve for the angle by which each joint rotates about its own axis. The surface mesh model consists of 10000 triangular patches formed by 5000 vertices, simulating the outer skin of the human body. Each joint of the skeleton model is denoted j, and each vertex of the surface mesh model is denoted V. Since skeleton and skin are intrinsically linked during real human motion, the rotation of a skeletal joint drives the nearby surface mesh vertices to deform with it, so the skeleton-driven surface rule is defined in the following form: T_X V = ∏_{j=0}^{n} exp(θ_j ξ_j) V, where V is a surface mesh vertex, T_X V is the target position obtained by deforming the surface mesh vertex under the joint rotation angles X, j indexes the joints, and θ_j ξ_j are the joint rotation (twist) parameters.
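The product-of-exponentials deformation can be sketched in NumPy as follows; the twist exponential is evaluated in closed form with Rodrigues' formula, assuming a unit rotation axis ω per joint. The (θ, ω, v) tuple layout for a joint is an assumption for illustration, not the patent's data structure.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix so that hat(w) @ x == np.cross(w, x)."""
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def twist_exp(theta, omega, v):
    """4x4 rigid transform exp(theta * xi_hat) for a twist xi = (v, omega)
    with unit rotation axis omega, via the standard closed form."""
    W = hat(omega)
    # Rodrigues' rotation formula.
    R = np.eye(3) + np.sin(theta) * W + (1 - np.cos(theta)) * W @ W
    p = (np.eye(3) - R) @ np.cross(omega, v) + np.outer(omega, omega) @ v * theta
    G = np.eye(4)
    G[:3, :3] = R
    G[:3, 3] = p
    return G

def deform_vertex(V, joints):
    """T_X V = prod_j exp(theta_j xi_j) V for the joints influencing V,
    given as a list of (theta, omega, v) tuples."""
    T = np.eye(4)
    for theta, omega, v in joints:
        T = T @ twist_exp(theta, omega, v)
    return (T @ np.append(V, 1.0))[:3]
```

For example, a single joint rotating 90 degrees about the z axis through the origin maps the vertex (1, 0, 0) to (0, 1, 0).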
In one embodiment of the invention, the matching process is as follows: the matching metric function S(V, P) between each surface mesh vertex V and the 3D point cloud point P is computed exhaustively, and the vertex V that maximizes the metric function is chosen as the point successfully matched to P. The metric function is computed as:
S(V, P) = max(N(V)·N(P), σ_N) · exp(−(C(V) − C(P))²/σ_C²) · exp(−(X(V) − X(P))²/σ_X²),
where N(V) and N(P) denote the normal vectors of the surface mesh vertex V and the 3D point cloud point P respectively, σ_N is a threshold on the inner product of the two normals, C(V) and C(P) denote the color values of V and P, σ_C is a color-distribution normalization threshold, X(V) and X(P) denote the positions of V and P, and σ_X is a distance-distribution normalization threshold.
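A minimal sketch of this exhaustive matching in NumPy, assuming each point is represented as a dict of unit normal, color and position arrays; the σ defaults are placeholders, since the patent gives no numeric thresholds.

```python
import numpy as np

def match_score(V, P, sigma_n=0.5, sigma_c=0.1, sigma_x=0.05):
    """S(V, P) = max(N(V)·N(P), sigma_N)
                 * exp(-|C(V)-C(P)|^2 / sigma_C^2)
                 * exp(-|X(V)-X(P)|^2 / sigma_X^2)."""
    normal = max(float(np.dot(V["normal"], P["normal"])), sigma_n)
    color = np.exp(-np.sum((V["color"] - P["color"]) ** 2) / sigma_c ** 2)
    pos = np.exp(-np.sum((V["position"] - P["position"]) ** 2) / sigma_x ** 2)
    return normal * color * pos

def match_point(P, mesh_vertices):
    """For one cloud point P, pick the mesh vertex maximising S(V, P)."""
    return max(mesh_vertices, key=lambda V: match_score(V, P))
```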
Step S105: according to the matching result of step S104, perform motion capture based on the skeleton-driven surface model, to obtain the tracking result.
In one embodiment of the invention, the motion capture problem is converted into: combining the defining formula of the skeleton-driven surface model, T_X V = ∏_{j=0}^{n} exp(θ_j ξ_j) V, solve the energy function arg min_X Σ_i w_i ||T_X V_i − P_i|| to obtain the optimal joint rotation angles X, which constitute the tracking result. It should be pointed out that in practice, since the points P in the point cloud carry no semantic information, strict P–V correspondences cannot be determined. We assume that the closest point in the point cloud is the corresponding point of a given surface point; iterating this assignment repeatedly then converges to the optimal parameters X.
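The alternation between closest-point assignment and energy minimisation can be illustrated with a deliberately reduced example: a single joint rotating about the z axis, for which arg min_θ Σ_i w_i ||R(θ)V_i − P_i||² has a closed form. This 1-DoF stand-in and every name in it are illustrative assumptions; the patent's full model optimises all joint angles X jointly.

```python
import numpy as np

def fit_angle(V, P, w):
    """Closed-form minimiser of sum_i w_i ||R(theta) V_i - P_i||^2
    for a single rotation about the z axis."""
    num = np.sum(w * (V[:, 0] * P[:, 1] - V[:, 1] * P[:, 0]))
    den = np.sum(w * (V[:, 0] * P[:, 0] + V[:, 1] * P[:, 1]))
    return np.arctan2(num, den)

def icp_track(V, cloud, w, n_iter=10):
    """Alternate closest-point correspondence and pose refinement
    (an ICP-style loop, as in the closest-point assumption above)."""
    theta = 0.0
    for _ in range(n_iter):
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
        TV = V @ R.T                       # current deformed surface points
        # Each surface point takes its nearest cloud point as correspondence.
        d = np.linalg.norm(TV[:, None, :] - cloud[None, :, :], axis=2)
        P = cloud[np.argmin(d, axis=1)]
        theta += fit_angle(TV, P, w)       # refine the pose increment
    return theta
```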
Fig. 2 is a flowchart of a markerless motion capture method based on multi-view depth cameras according to another embodiment of the invention. As shown in Fig. 2, a preferred embodiment of the invention further comprises, on the basis of the method shown in Fig. 1, step S206: applying a Laplacian surface deformation to the human body model so that the tracking result is closer to the real situation, where the Laplacian surface deformation is computed as:
arg min_V ( ||LV − δ||² + λ ||CV − q||² ),
where ||LV − δ||² is the Laplacian-coordinate surface geometry constraint, ||CV − q||² is the motion constraint, and λ is the surface mesh deformation weight.
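This quadratic objective has a closed-form minimiser; a sketch via the normal equations in NumPy follows. Dense matrices are used for brevity (a real mesh would use sparse solvers), and the particular choice of Laplacian L is not specified by the patent.

```python
import numpy as np

def laplacian_deform(L, delta, C, q, lam=1.0):
    """Solve argmin_V ||L V - delta||^2 + lam * ||C V - q||^2
    via the normal equations (L^T L + lam C^T C) V = L^T delta + lam C^T q.

    L: n x n mesh Laplacian, delta: n x 3 differential coordinates,
    C: m x n selector of constrained vertices, q: m x 3 target positions.
    """
    A = L.T @ L + lam * C.T @ C
    b = L.T @ delta + lam * C.T @ q
    return np.linalg.solve(A, b)
```

With δ taken from the tracked pose and q from the matched point cloud positions, the solve balances preserving surface detail against hitting the motion constraints, with λ weighting the trade-off.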
According to the markerless motion capture method based on multi-view depth cameras of the embodiments of the invention, motion capture can be carried out flexibly and conveniently without cumbersome markers, with accurate and robust motion parameters, high-fidelity surface mesh recovery, fast algorithm execution and low system cost.
Fig. 3 is a structural block diagram of a markerless motion capture device based on multi-view depth cameras according to an embodiment of the invention. As shown in Fig. 3, the device comprises the following parts: multi-view depth cameras 100, a calibration module 200, a point cloud conversion module 300, a matching module 400 and a motion capture module 500. Specifically:
The multi-view depth cameras 100 are used to acquire depth maps and color maps. Specifically, the multiple depth cameras 100 with different viewpoints capture depth video and color video; in practice, the continuous video is decomposed along the time axis into multi-frame depth maps and color maps, and motion capture is performed frame by frame.
The calibration module 200 is used to calibrate the multi-view depth cameras 100 to obtain the calibration parameters. Specifically, in one embodiment of the invention, for a camera C the calibration parameters comprise an intrinsic matrix Kc, a rotation matrix Rc and a translation vector Tc. The intrinsic matrix reflects basic properties of the depth camera (focal length, principal point position, etc.); intrinsic calibration is usually performed with the checkerboard calibration method. The extrinsic parameters reflect the position and attitude of the camera in the world coordinate system.
The point cloud conversion module 300 is used to perform the three-dimensional space transformation according to the calibration parameters and the depth maps, to obtain the point cloud set. Specifically, for each pixel of a depth map, its coordinates, its depth value and the calibration parameters of the camera can be combined and transformed into a point in three-dimensional space; the multiple depth maps are thus transformed and fused, finally yielding a single overall point cloud set. In one embodiment of the invention, the transformation is computed as P = Rc^(-1) · (d(i, j) · Kc^(-1) · (i, j, 1)^T − Tc), where P is a 3D point in the point cloud set, (i, j) is a pixel in the depth map with coordinates i and j, and d(i, j) is the depth value at pixel (i, j).
The matching module 400 is used to match each 3D point cloud point P in the point cloud set with each surface mesh vertex V on the human body model according to the information in the depth maps and color maps, to obtain the matching result.
To help those skilled in the art better understand the invention, the human body model mentioned in the invention is introduced here. The human body model consists of a human skeleton model and a surface mesh model. The human skeleton model is a human kinematic chain composed of 31 joints; similar to human physiology, the rotation axis of each joint is fixed, and the essence of the motion tracking problem is to solve for the angle by which each joint rotates about its own axis. The surface mesh model consists of 10000 triangular patches formed by 5000 vertices, simulating the outer skin of the human body. Each joint of the skeleton model is denoted j, and each vertex of the surface mesh model is denoted V. Since skeleton and skin are intrinsically linked during real human motion, the rotation of a skeletal joint drives the nearby surface mesh vertices to deform with it, so the skeleton-driven surface rule is defined in the following form:
T_X V = ∏_{j=0}^{n} exp(θ_j ξ_j) V,
where V is a surface mesh vertex, T_X V is the target position obtained by deforming the surface mesh vertex under the joint rotation angles X, j indexes the joints, and θ_j ξ_j are the joint rotation (twist) parameters.
In one embodiment of the invention, the matching process is as follows: the matching metric function S(V, P) between each surface mesh vertex V and the 3D point cloud point P is computed exhaustively, and the vertex V that maximizes the metric function is chosen as the point successfully matched to P. The metric function is computed as:
S(V, P) = max(N(V)·N(P), σ_N) · exp(−(C(V) − C(P))²/σ_C²) · exp(−(X(V) − X(P))²/σ_X²),
where N(V) and N(P) denote the normal vectors of the surface mesh vertex V and the 3D point cloud point P respectively, σ_N is a threshold on the inner product of the two normals, C(V) and C(P) denote the color values of V and P, σ_C is a color-distribution normalization threshold, X(V) and X(P) denote the positions of V and P, and σ_X is a distance-distribution normalization threshold.
The motion capture module 500 is used to perform motion capture by optimizing the energy function according to the matching result, to obtain the tracking result. In one embodiment of the invention, the motion capture problem is converted into: combining the defining formula of the skeleton-driven surface model, T_X V = ∏_{j=0}^{n} exp(θ_j ξ_j) V, solve the energy function arg min_X Σ_i w_i ||T_X V_i − P_i|| to obtain the optimal joint rotation angles X, which constitute the tracking result. It should be pointed out that in practice, since the points P in the point cloud carry no semantic information, strict P–V correspondences cannot be determined. We assume that the closest point in the point cloud is the corresponding point of a given surface point; iterating this assignment repeatedly then converges to the optimal parameters X.
Fig. 4 is a structural block diagram of a markerless motion capture device based on multi-view depth cameras according to another embodiment of the invention. As shown in Fig. 4, a preferred embodiment of the invention further comprises an optimization module 600. The optimization module 600 is used to apply a Laplacian surface deformation to the human body model so that the tracking result is closer to the real situation, where the Laplacian surface deformation is computed as:
arg min_V ( ||LV − δ||² + λ ||CV − q||² ),
where ||LV − δ||² is the Laplacian-coordinate surface geometry constraint, ||CV − q||² is the motion constraint, and λ is the surface mesh deformation weight.
According to the markerless motion capture device based on multi-view depth cameras of the embodiments of the invention, motion capture can be carried out flexibly and conveniently without cumbersome markers, with accurate and robust motion parameters, high-fidelity surface mesh recovery, fast algorithm execution and low system cost.
In the description of this specification, reference to the terms "an embodiment", "some embodiments", "an example", "a specific example" or "some examples" means that a particular feature, structure, material or characteristic described in connection with that embodiment or example is included in at least one embodiment or example of the invention. In this specification, schematic references to these terms do not necessarily refer to the same embodiment or example. Moreover, the particular features, structures, materials or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
Although embodiments of the present invention have been shown and described, those of ordinary skill in the art will understand that various changes, modifications, substitutions, and variations can be made to these embodiments without departing from the principle and spirit of the invention, the scope of which is defined by the appended claims and their equivalents.

Claims (14)

1. A markerless motion capture method based on multi-view depth cameras, characterized by comprising the following steps:
A. calibrating the multi-view depth cameras to obtain calibration parameters;
B. acquiring a depth map and a color map through the multi-view depth cameras;
C. performing a three-dimensional space transformation according to the calibration parameters and the depth map to obtain a point cloud set;
D. matching each three-dimensional point cloud point P in the point cloud set with each surface mesh point V on a human body model according to the information in the depth map and the color map, to obtain a matching result;
E. performing motion capture based on the matching result by optimizing an energy function, to obtain a tracking result.
2. The markerless motion capture method based on multi-view depth cameras of claim 1, characterized in that the calibration parameters comprise an intrinsic matrix Kc, a rotation matrix Rc, and a translation vector Tc.
3. The markerless motion capture method based on multi-view depth cameras of claim 2, characterized in that the three-dimensional space transformation yielding the point cloud set is performed according to the following formula:
P = Rc⁻¹ (Kc⁻¹ · d(i, j) · (i, j, 1)ᵀ − Tc)
wherein P is a three-dimensional point cloud point in the point cloud set, (i, j) is a pixel in the depth map, i and j denote the coordinates of the pixel, and d(i, j) denotes the depth value of the pixel (i, j).
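A sketch of the per-pixel back-projection follows. The published formula image is not reproduced in this text, so the standard pinhole inverse with the calibration parameters of claim 2 is an assumption here, as is the (row, column) ordering of the pixel coordinates (i, j).

```python
import numpy as np

def depth_to_cloud(depth, Kc, Rc, Tc):
    """Back-project every valid depth pixel to a world-frame 3D point:
    P = Rc^-1 (Kc^-1 * d(i,j) * [i, j, 1]^T - Tc).
    Standard pinhole inverse; an assumption, not the patent's exact form."""
    Kinv = np.linalg.inv(Kc)
    Rinv = np.linalg.inv(Rc)
    pts = []
    h, w = depth.shape
    for i in range(h):
        for j in range(w):
            d = depth[i, j]
            if d <= 0:                # invalid depth samples are skipped
                continue
            cam = Kinv @ (d * np.array([i, j, 1.0]))   # camera frame
            pts.append(Rinv @ (cam - Tc))              # world frame
    return np.array(pts)
```

With identity calibration (Kc = Rc = I, Tc = 0), a pixel at (0, 0) with depth 2 maps to the point (0, 0, 2) on the optical axis.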
4. The markerless motion capture method based on multi-view depth cameras of claim 3, characterized in that matching each three-dimensional point cloud point P in the point cloud set with each surface mesh point V on the human body model further comprises:
exhaustively computing a matching metric function S(V, P) between the three-dimensional point cloud point P and each surface mesh point V, and selecting the point V that maximizes the metric function as the point successfully matched to the point P, wherein the metric function is computed as:
S(V, P) = max(N(V)·N(P), σ_N) · exp(−(C(V) − C(P))² / σ_C²) · exp(−(X(V) − X(P))² / σ_X²)
wherein N(V) and N(P) denote the normal-direction information values of the surface mesh point V and the three-dimensional point cloud point P respectively, σ_N denotes the threshold on the inner product of the two normals, C(V) and C(P) denote the color information values of V and P respectively, σ_C denotes the color distribution normalization threshold, X(V) and X(P) denote the position information values of V and P respectively, and σ_X denotes the distance distribution normalization threshold.
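The metric of claim 4 can be sketched directly. The threshold values σ_N, σ_C, σ_X below are illustrative assumptions; the patent does not fix them.

```python
import numpy as np

def match_score(nV, nP, cV, cP, xV, xP, sN=0.5, sC=0.1, sX=0.05):
    """S(V,P) = max(N(V).N(P), sigma_N)
              * exp(-(C(V)-C(P))^2 / sigma_C^2)
              * exp(-(X(V)-X(P))^2 / sigma_X^2)
    Normal agreement is an inner product floored at sigma_N; color and
    position similarity are Gaussian factors over squared differences."""
    normal = max(float(np.dot(nV, nP)), sN)
    color = np.exp(-np.sum((cV - cP) ** 2) / sC ** 2)
    dist = np.exp(-np.sum((xV - xP) ** 2) / sX ** 2)
    return normal * color * dist
```

A point with identical normal, color, and position scores 1; any mismatch in color or position decays the score toward 0, so the argmax over V picks the most consistent candidate.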
5. The markerless motion capture method based on multi-view depth cameras of claim 4, characterized in that the human body model conforms to the skeleton-driven surface rule, i.e., satisfies its defining formula:
T_X V = ∏_{j=0}^{n} exp(θ_j ξ_j) V
wherein V is a surface mesh point, T_X V is the target position obtained by deforming the surface mesh point through the transformation T_X under the joint rotation angles X, j indexes the joints, and θ_j ξ_j are the joint rotation parameters.
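The product-of-exponentials formula can be sketched with the standard twist exponential in its Rodrigues form. This is a sketch under the usual screw-theory convention for exp(θ ξ); the axis and vertex data below are illustrative assumptions.

```python
import numpy as np

def twist_exp(omega, v, theta):
    """One factor exp(theta * xi) of the product of exponentials:
    unit rotation axis omega, twist linear part v, joint angle theta,
    returned as a 4x4 homogeneous transform (Rodrigues formula)."""
    omega, v = np.asarray(omega, float), np.asarray(v, float)
    w = np.array([[0.0, -omega[2], omega[1]],
                  [omega[2], 0.0, -omega[0]],
                  [-omega[1], omega[0], 0.0]])       # skew(omega)
    R = np.eye(3) + np.sin(theta) * w + (1.0 - np.cos(theta)) * (w @ w)
    p = (np.eye(3) - R) @ np.cross(omega, v) + np.outer(omega, omega) @ v * theta
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, p
    return T

def skin_vertex(V, twists, thetas):
    """T_X V = prod_j exp(theta_j * xi_j) applied to a homogeneous vertex V."""
    T = np.eye(4)
    for (omega, v), th in zip(twists, thetas):
        T = T @ twist_exp(omega, v, th)
    return (T @ np.append(V, 1.0))[:3]
```

For example, a single z-axis joint through the origin with θ = π/2 carries the vertex (1, 0, 0) to (0, 1, 0), and chaining two θ = π/4 factors gives the same result.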
6. The markerless motion capture method based on multi-view depth cameras of claim 5, characterized in that performing motion capture based on the matching result by optimizing an energy function to obtain the tracking result comprises: according to the matching result and in combination with the defining formula of the skeleton-driven surface rule, solving the energy function
arg min_X Σ_i w_i ‖T_X V_i − P_i‖
to obtain the optimal joint rotation angles X, i.e., the tracking result.
7. The markerless motion capture method based on multi-view depth cameras of claim 6, characterized by further comprising the step of: applying Laplacian surface deformation to the human body model so that the tracking result better matches the actual surface, wherein the Laplacian surface deformation is computed as:
arg min_V λ‖LV − δ‖² + ‖CV − q‖²
wherein ‖LV − δ‖² is the Laplacian-coordinate surface geometry constraint, ‖CV − q‖² is the motion constraint, and λ is the surface mesh deformation weight.
8. A markerless motion capture device based on multi-view depth cameras, characterized by comprising:
multi-view depth cameras, configured to acquire a depth map and a color map;
a calibration module, configured to calibrate the multi-view depth cameras to obtain calibration parameters;
a point cloud conversion module, configured to perform a three-dimensional space transformation according to the calibration parameters and the depth map to obtain a point cloud set;
a matching module, configured to match each three-dimensional point cloud point P in the point cloud set with each surface mesh point V on a human body model according to the information in the depth map and the color map, to obtain a matching result;
a motion capture module, configured to perform motion capture based on the matching result by optimizing an energy function, to obtain a tracking result.
9. The markerless motion capture device based on multi-view depth cameras of claim 8, characterized in that the calibration parameters comprise an intrinsic matrix Kc, a rotation matrix Rc, and a translation vector Tc.
10. The markerless motion capture device based on multi-view depth cameras of claim 9, characterized in that the three-dimensional space transformation yielding the point cloud set is performed according to the following formula:
P = Rc⁻¹ (Kc⁻¹ · d(i, j) · (i, j, 1)ᵀ − Tc)
wherein P is a three-dimensional point cloud point in the point cloud set, (i, j) is a pixel in the depth map, i and j denote the coordinates of the pixel, and d(i, j) denotes the depth value of the pixel (i, j).
11. The markerless motion capture device based on multi-view depth cameras of claim 10, characterized in that, in the matching module:
a matching metric function S(V, P) between the three-dimensional point cloud point P and each surface mesh point V is computed exhaustively, and the point V that maximizes the metric function is selected as the point successfully matched to the point P, wherein the metric function is computed as:
S(V, P) = max(N(V)·N(P), σ_N) · exp(−(C(V) − C(P))² / σ_C²) · exp(−(X(V) − X(P))² / σ_X²)
wherein N(V) and N(P) denote the normal-direction information values of the surface mesh point V and the three-dimensional point cloud point P respectively, σ_N denotes the threshold on the inner product of the two normals, C(V) and C(P) denote the color information values of V and P respectively, σ_C denotes the color distribution normalization threshold, X(V) and X(P) denote the position information values of V and P respectively, and σ_X denotes the distance distribution normalization threshold.
12. The markerless motion capture device based on multi-view depth cameras of claim 11, characterized in that the human body model conforms to the skeleton-driven surface rule, i.e., satisfies its defining formula:
T_X V = ∏_{j=0}^{n} exp(θ_j ξ_j) V
wherein V is a surface mesh point, T_X V is the target position obtained by deforming the surface mesh point through the transformation T_X under the joint rotation angles X, j indexes the joints, and θ_j ξ_j are the joint rotation parameters.
13. The markerless motion capture device based on multi-view depth cameras of claim 12, characterized in that, in the motion capture module, according to the matching result and in combination with the defining formula of the skeleton-driven surface rule, the energy function
arg min_X Σ_i w_i ‖T_X V_i − P_i‖
is solved to obtain the optimal joint rotation angles X, i.e., the tracking result.
14. The markerless motion capture device based on multi-view depth cameras of claim 13, characterized by further comprising: an optimization module, configured to apply Laplacian surface deformation to the human body model so that the tracking result better matches the actual surface, wherein the Laplacian surface deformation is computed as: arg min_V λ‖LV − δ‖² + ‖CV − q‖², wherein ‖LV − δ‖² is the Laplacian-coordinate surface geometry constraint, ‖CV − q‖² is the motion constraint, and λ is the surface mesh deformation weight.
CN201210207809.2A 2012-06-18 2012-06-18 Unmarked motion capturing method and device based on multi-visual angle depth camera Active CN102800103B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210207809.2A CN102800103B (en) 2012-06-18 2012-06-18 Unmarked motion capturing method and device based on multi-visual angle depth camera

Publications (2)

Publication Number Publication Date
CN102800103A true CN102800103A (en) 2012-11-28
CN102800103B CN102800103B (en) 2015-02-18

Family

ID=47199200

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210207809.2A Active CN102800103B (en) 2012-06-18 2012-06-18 Unmarked motion capturing method and device based on multi-visual angle depth camera

Country Status (1)

Country Link
CN (1) CN102800103B (en)

Cited By (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103040471A (en) * 2012-12-10 2013-04-17 中国农业大学 Cow figure linear identification index obtaining system and method
CN103237155A (en) * 2013-04-01 2013-08-07 北京工业大学 Tracking and positioning method of single-view-blocked target
CN104798128A (en) * 2012-10-05 2015-07-22 维迪诺蒂有限公司 Annotation method and apparatus
TWI498832B (en) * 2013-01-22 2015-09-01 Univ Nat Cheng Kung Computer implemented method and system of estimating kinematic or dynamic parameters for individuals
WO2016004863A1 (en) * 2014-07-10 2016-01-14 Perfetch, Llc, Wilmington, De Systems and methods for constructing a three dimensional (3d) color representation of an object
CN105674991A (en) * 2016-03-29 2016-06-15 深圳市华讯方舟科技有限公司 Robot positioning method and device
CN106164978A (en) * 2014-01-28 2016-11-23 西门子保健有限责任公司 Parametrization deformable net is used to construct the method and system of personalized materialization
CN107440712A (en) * 2017-04-13 2017-12-08 浙江工业大学 A kind of EEG signals electrode acquisition method based on depth inductor
CN108122275A (en) * 2017-12-22 2018-06-05 清华大学 Dynamic realtime 3 D human body method for reconstructing and system based on skeleton tracking
CN108629831A (en) * 2018-04-10 2018-10-09 清华大学 3 D human body method for reconstructing and system based on parametric human body template and inertia measurement
CN108711185A (en) * 2018-05-15 2018-10-26 清华大学 Joint rigid moves and the three-dimensional rebuilding method and device of non-rigid shape deformations
CN108981570A (en) * 2018-07-12 2018-12-11 浙江大学 A kind of portable type physical distribution package volume measurement device
CN109165646A (en) * 2018-08-16 2019-01-08 北京七鑫易维信息技术有限公司 The method and device of the area-of-interest of user in a kind of determining image
CN109242887A (en) * 2018-07-27 2019-01-18 浙江工业大学 A kind of real-time body's upper limks movements method for catching based on multiple-camera and IMU
CN109377564A (en) * 2018-09-30 2019-02-22 清华大学 Virtual fit method and device based on monocular depth camera
CN109726666A (en) * 2018-12-25 2019-05-07 鸿视线科技(北京)有限公司 Motion capture method, system and computer readable storage medium based on calibration
CN110196031A (en) * 2019-04-26 2019-09-03 西北大学 A kind of scaling method of three-dimensional point cloud acquisition system
CN110598590A (en) * 2019-08-28 2019-12-20 清华大学 Close interaction human body posture estimation method and device based on multi-view camera
CN111275734A (en) * 2018-12-04 2020-06-12 中华电信股份有限公司 Object identification and tracking system and method thereof
CN111739080A (en) * 2020-07-23 2020-10-02 成都艾尔帕思科技有限公司 Method for constructing 3D space and 3D object by multiple depth cameras
CN112883757A (en) * 2019-11-29 2021-06-01 北京航空航天大学 Method for generating tracking attitude result
CN113246131A (en) * 2021-05-27 2021-08-13 广东智源机器人科技有限公司 Motion capture method and device, electronic equipment and mechanical arm control system
CN113487726A (en) * 2021-07-12 2021-10-08 北京未来天远科技开发有限公司 Motion capture system and method
CN113689578A (en) * 2020-05-15 2021-11-23 杭州海康威视数字技术股份有限公司 Human body data set generation method and device
CN113870358A (en) * 2021-09-17 2021-12-31 聚好看科技股份有限公司 Method and equipment for joint calibration of multiple 3D cameras
CN114829872A (en) * 2019-11-12 2022-07-29 谷歌有限责任公司 Volumetric performance capture with relighting
WO2024197890A1 (en) * 2023-03-31 2024-10-03 京东方科技集团股份有限公司 Posture recognition system and method, and electronic device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7239718B2 (en) * 2002-12-20 2007-07-03 Electronics And Telecommunications Research Institute Apparatus and method for high-speed marker-free motion capture
CN101232571A (en) * 2008-01-25 2008-07-30 北京中星微电子有限公司 Human body image matching method and video analyzing search system
CN102074019A (en) * 2010-12-28 2011-05-25 深圳泰山在线科技有限公司 Human tracking method and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KUN LI et al.: "Markerless Shape and Motion Capture from Multiview Video Sequences", IEEE Transactions on Circuits and Systems for Video Technology, vol. 21, no. 3, 31 March 2011 (2011-03-31), pages 320-334, XP011364022, DOI: 10.1109/TCSVT.2011.2106251 *
YEBIN LIU et al.: "A Point Cloud based Multi-view Stereo Algorithm for Free-viewpoint Video", IEEE Transactions on Visualization and Computer Graphics, vol. 16, no. 3, 30 June 2010 (2010-06-30), pages 1-13 *
DUAN, Chunmei: "Research on 3D Model Reconstruction Methods Based on Multiple Views", China Doctoral Dissertations Full-text Database, no. 04, 15 April 2010 (2010-04-15), pages 32-60 *

Also Published As

Publication number Publication date
CN102800103B (en) 2015-02-18
