CN102222361A - Method and system for capturing and reconstructing 3D model - Google Patents

Method and system for capturing and reconstructing 3D model

Info

Publication number
CN102222361A
Authority
CN
China
Prior art keywords
model
dynamic
visual angle
static
reconstructing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 201110167593
Other languages
Chinese (zh)
Inventor
戴琼海
李坤
徐文立
Current Assignee
Tsinghua University
Original Assignee
Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University
Priority to CN 201110167593
Publication of CN102222361A
Legal status: Pending

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The invention provides a method for capturing and reconstructing a dynamic 3D model. The method comprises the following steps: obtaining a static 3D model; converting the surface model of the static 3D model into a volume model to serve as the default scene representation for motion tracking; obtaining the initial 3D motion of the model vertices at the next time instant; selecting reliable vertices from the obtained vertices according to preset spatio-temporal constraint conditions and using them as position constraints for the volumetric deformation; and driving a Laplacian volumetric deformation framework to update the dynamic 3D model according to the position constraints. In addition, the invention also provides a system for capturing and reconstructing a 3D model.

Description

Method and system for capturing and reconstructing a three-dimensional model
Technical field
The present invention relates to the field of computer video processing technology, and in particular to a method and system for capturing and reconstructing a three-dimensional model.
Background art
For the three-dimensional reconstruction of dynamic scenes, much prior work simply treats the problem as a static-scene reconstruction problem accumulated along the time dimension: temporal information is not used to assist scene reconstruction, and each frame is modeled in 3D independently. However, this approach has high complexity and large storage requirements, cannot guarantee topological consistency of the model between frames, and easily produces jitter artifacts. Moreover, three-dimensional modeling performed in this way cannot effectively analyze the motion of non-rigid models, nor can it obtain a model at an arbitrary time instant by interpolation in the time domain. Through study of this class of problems, the prior art has proposed reconstruction methods that jointly solve for the 3D scene flow and the geometric model. A further proposal uses variational methods to unify the reconstruction of dynamic scene geometry and motion; however, because geometric reconstruction and motion reconstruction are carried out iteratively (the geometric reconstruction at one time instant serves as the initial value from which the motion reconstruction derives the model at the next time instant), the reconstruction efficiency of this spatio-temporal joint method is still low, and the practical results are also unsatisfactory.
Therefore, to avoid the high cost and difficulty of spatio-temporal joint reconstruction, another class of video-based dynamic 3D reconstruction methods uses the static 3D reconstruction result of the initial frame as the scene representation, then applies a 3D motion tracking algorithm to solve for the motion of the 3D object, and uses a suitable deformation algorithm to drive the motion of the static model, thereby obtaining dynamic 3D reconstruction results. At present, video-based 3D motion tracking can be divided into two classes: marker-based 3D motion tracking and markerless 3D motion tracking. Marker-based 3D motion tracking is accurate, but the captured actor must wear a tight-fitting garment bearing markers, which limits the capture of shape and texture. Markerless 3D motion tracking methods overcome this defect. One markerless approach captures the motion of a human body wearing ordinary loose clothing through a joint motion model and garment shape model, but it cannot capture the precise geometric structure of the moving object. Another markerless approach captures the skeleton and the shape of the object simultaneously; however, because some local surfaces do not change appropriately over time, this method still cannot track 3D motion effectively. In addition, because it relies only on silhouette information, it is very sensitive to silhouette errors. Although markerless methods are more flexible, it is difficult for them to reach the same precision as marker-based methods. Furthermore, most 3D motion tracking methods rely on extracting a kinematic skeleton to assist motion capture, and a kinematic skeleton can only track rigid motion, so these methods often need other scanning techniques to assist in capturing the time-varying shape. Finally, none of the above methods can track the motion of a person wearing casual clothing.
In recent years, new methods for animation capture and design, animation editing, and deformation transfer have continued to emerge in computer graphics. These methods no longer depend on a kinematic skeleton and kinematic parameters, but are based on surface models and general warping methods, and can therefore capture the deformation of both rigid and non-rigid bodies. However, in all of these multi-view-video-based motion capture and recovery methods, the static 3D reconstruction of the initial frame must be performed with a laser scanner. Although a laser scanner can obtain highly accurate 3D reconstruction results, laser scanners are expensive, scanning is time-consuming and labor-intensive, and the person must remain completely motionless during the scan. For the convenience of subsequent work, the person normally stands with both fists clenched, and the captured multi-view video also shows actions performed with clenched fists. Moreover, methods that use the laser-scanned result as the initial scene representation always preserve, throughout the recovered dynamic 3D sequence, certain surface features that the model had at scanning time, such as the folds of clothing.
Summary of the invention
The purpose of the present invention is to solve at least the above technical defects. The present invention proposes methods and systems for capturing and reconstructing static and dynamic three-dimensional models.
To achieve the above object, one aspect of the present invention proposes a method for capturing and reconstructing a static three-dimensional model, comprising the following steps: performing image acquisition of a moving object in a ring-shaped capture area; obtaining a visual hull model; obtaining a depth point cloud for each viewpoint according to the image of each viewpoint, the visual hull model, and preset constraint conditions; and fusing the obtained depth point clouds of all viewpoints to obtain the static three-dimensional model.
Another aspect of the present invention proposes a system for capturing and reconstructing a static three-dimensional model, comprising: a plurality of cameras arranged around a ring-shaped capture area, configured to perform image acquisition of a moving object in the area; and a static three-dimensional model reconstruction device, configured to obtain a visual hull model, obtain a depth point cloud for each viewpoint according to the image of each viewpoint, the visual hull model, and preset constraint conditions, and fuse the obtained depth point clouds of all viewpoints to obtain the static three-dimensional model.
A further aspect of the present invention proposes a method for capturing and reconstructing a dynamic three-dimensional model, comprising the following steps: obtaining a static three-dimensional model; converting the surface model of the static three-dimensional model into a volume model, which serves as the default scene representation for motion tracking; obtaining the initial three-dimensional motion of the model vertices at the next time instant; selecting reliable vertices from the obtained vertices according to predetermined spatio-temporal constraint conditions as the position constraints of the volumetric deformation; and driving a Laplacian volumetric deformation framework to update the dynamic three-dimensional model according to the position constraints.
A further aspect of the present invention proposes a system for capturing and reconstructing a dynamic three-dimensional model, comprising: a plurality of cameras arranged around a ring-shaped capture area, configured to perform image acquisition of a moving object in the area; a static three-dimensional model obtaining device, configured to obtain a static three-dimensional model; and a dynamic three-dimensional model reconstruction device, configured to convert the surface model of the static three-dimensional model into a volume model serving as the default scene representation for motion tracking, obtain the initial three-dimensional motion of the model vertices at the next time instant, select reliable vertices from the obtained vertices according to predetermined spatio-temporal constraint conditions as the position constraints of the volumetric deformation, and drive the Laplacian volumetric deformation framework to update the dynamic three-dimensional model according to the position constraints.
The present invention guarantees the accuracy and completeness of the shape reconstructed for the static three-dimensional model. In addition, the present invention designs a new three-dimensional motion estimation method based on sparse representation theory, together with a deformation optimization framework based on the volume model, and can therefore obtain high-quality dynamic reconstruction results. Moreover, the present invention does not rely on a 3D scanner or optical markers, so its cost is low, and it can track the motion of a person wearing casual clothing.
Additional aspects and advantages of the present invention will be given in part in the following description, will become apparent in part from the following description, or will be learned through practice of the present invention.
Description of drawings
The above and/or additional aspects and advantages of the present invention will become apparent and easy to understand from the following description of the embodiments in conjunction with the accompanying drawings, in which:
Fig. 1 is a flowchart of the method for capturing and reconstructing a static three-dimensional model according to an embodiment of the present invention;
Fig. 2 shows 20 cameras distributed in a ring around the scene to be captured according to an embodiment of the present invention;
Fig. 3 is a flowchart of the method for capturing and reconstructing a dynamic three-dimensional model according to an embodiment of the present invention;
Fig. 4 is a schematic block diagram of the overall dynamic three-dimensional reconstruction method according to an embodiment of the present invention; and
Fig. 5 shows the dynamic three-dimensional model results obtained by applying the method of the present invention to two long time sequences.
Embodiments
Embodiments of the present invention are described in detail below; examples of the embodiments are shown in the accompanying drawings, in which identical or similar reference numerals throughout denote identical or similar elements, or elements with identical or similar functions. The embodiments described below with reference to the drawings are exemplary, are intended only to explain the present invention, and are not to be construed as limiting the present invention.
The embodiments of the present invention propose methods for capturing and reconstructing static and dynamic three-dimensional models, respectively. It should be noted that the capture and reconstruction of the dynamic three-dimensional model may be based on a static three-dimensional model obtained by the present invention, or on a static three-dimensional model obtained by other means, for example an existing 3D scanner; all of these fall within the protection scope of the present invention.
As shown in Fig. 1, the method for capturing and reconstructing a static three-dimensional model according to an embodiment of the present invention comprises the following steps:
Step S101: image acquisition is performed on the moving object in the ring-shaped capture area. For example, 20 cameras are arranged around the area, the frame rate of each camera is 30 frames per second, and the cameras are controlled to capture the moving object in the area. Of course, those skilled in the art may select more cameras to obtain more viewpoint images, or reduce the number of cameras; these all fall within the protection scope of the present invention. In one example of the present invention, as shown in Fig. 2, the 20 cameras are distributed in a ring around the scene to be captured, where Ci denotes the i-th camera. The resolution of the captured images is 1024 × 768. The person being captured stands at the center of the ring.
Step S102: the visual hull model of the initial time instant is obtained.
Step S103: the depth point cloud of each viewpoint is obtained according to the image of each viewpoint, the visual hull model, and the preset constraint conditions. Specifically, this may comprise:
Step S201: each viewpoint image is intersected with the obtained visual hull model to obtain the visible point cloud of each viewpoint.
Step S202: the visible point cloud of each viewpoint is projected onto that viewpoint's image to obtain an initial depth point cloud estimate, namely an offset d = (a, b, 1) along the epipolar line direction.
Step S203: an accurate depth point cloud is obtained according to the initial depth point cloud estimate and the preset constraint conditions, wherein the preset constraint conditions comprise one or more of an epipolar geometry constraint, a brightness constraint, a gradient constraint, and a smoothness constraint. In a preferred embodiment of the present invention, all four constraints are included simultaneously, and the accurate depth point cloud is obtained by minimizing the following energy:
E(a, b) = ∫_Ω β(x) Ψ(|I(x_b + d) − I(x)|² + γ|∇I(x_b + d) − ∇I(x)|²) dx + α ∫_Ω Ψ(|∇a|² + |∇b|²) dx,
where x := (x, y, c) denotes a pixel position (x, y) in the image of reference viewpoint c, whose brightness is I(x); x_b := (x_b, y_b, c+1) is the corresponding point on the epipolar line in viewpoint c+1, and w is the offset of the point corresponding to x in viewpoint c+1; ∇ is the spatial gradient operator; and β(x) is an occlusion map, equal to 1 for pixels in unoccluded regions and 0 otherwise. To account for the influence of outliers in the model assumptions, the robust penalty function Ψ(s²) = √(s² + ε²) is adopted, producing a total-variation regularization, where ε is a very small value (set to 0.001 in the experiments). This formula incorporates the four constraints: the epipolar geometry constraint (x_b + d = x + w), the brightness constraint (I(x_b + d) = I(x)), the gradient constraint (∇I(x_b + d) = ∇I(x)), and the smoothness constraint (on ∇a and ∇b).
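For illustration, the robust penalty and a discretized form of this energy can be sketched as follows. This is a minimal numpy sketch under stated assumptions: the function names, the forward-difference discretization, and the parameter values are illustrative choices, not the patent's implementation, and `I_warp` is assumed to be the viewpoint c+1 image already warped by the current offset d.

```python
import numpy as np

def psi(s2, eps=1e-3):
    """Robust total-variation penalty Psi(s^2) = sqrt(s^2 + eps^2)."""
    return np.sqrt(s2 + eps**2)

def grad(img):
    """Forward-difference spatial gradient (gy, gx) of a 2D image."""
    gy = np.zeros_like(img)
    gx = np.zeros_like(img)
    gy[:-1, :] = img[1:, :] - img[:-1, :]
    gx[:, :-1] = img[:, 1:] - img[:, :-1]
    return gy, gx

def depth_energy(I_ref, I_warp, a, b, beta, alpha=0.1, gamma=0.5, eps=1e-3):
    """Discrete analogue of E(a, b): a brightness + gradient data term
    weighted by the occlusion map beta, plus a smoothness term on the
    offset components a and b."""
    gy_r, gx_r = grad(I_ref)
    gy_w, gx_w = grad(I_warp)
    data = psi((I_warp - I_ref)**2
               + gamma * ((gy_w - gy_r)**2 + (gx_w - gx_r)**2), eps)
    ga_y, ga_x = grad(a)
    gb_y, gb_x = grad(b)
    smooth = psi(ga_y**2 + ga_x**2 + gb_y**2 + gb_x**2, eps)
    return float(np.sum(beta * data) + alpha * np.sum(smooth))
```

A full solver would minimize this energy over (a, b), for example by coarse-to-fine variational optimization; the sketch only evaluates it for a candidate offset field.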
Step S104: the obtained depth point clouds of all viewpoints are fused to obtain the static three-dimensional model. Specifically, this may comprise the following steps:
Step S301: the depth point clouds of all viewpoints are fused, and outliers are removed using the silhouette constraint.
Step S302: the full surface model is reconstructed by the marching cubes method to obtain the static three-dimensional model.
The present invention guarantees the accuracy and completeness of the reconstructed static three-dimensional model shape; this accuracy and completeness are the basis of the dynamic three-dimensional model reconstruction.
As shown in Fig. 3, the method for capturing and reconstructing a dynamic three-dimensional model according to an embodiment of the present invention comprises the following steps:
Step S401: the surface model of the static three-dimensional model is converted into a volume model, which serves as the default scene representation for motion tracking.
Step S402: the initial three-dimensional motion of the model vertices at the next time instant is obtained. Specifically, this may comprise the following steps:
Step S501: the optical flow of each viewpoint image at the next time instant is computed.
Step S502: the scene flow of each visible point is solved from the optical flow of its viewpoint and those of the adjacent viewpoints; the scene flow of invisible points is assigned a relatively large value, for example 10000.
Step S503: taking the scene flow solved for each viewpoint as a column, a matrix M ∈ ℝ^{m×n} is constructed, where m is the number of surface vertices.
Step S504: based on sparse representation theory, a new matrix X is obtained by solving the following low-rank matrix recovery problem:
minimize ‖X‖_*
subject to P_Ω(X) = P_Ω(M),
where X is the unknown variable, Ω is a subset of the complete element set [m] × [n] ([n] is defined as the sequence {1, ..., n}), and P_Ω is the sampling operator, defined as
[P_Ω(X)]_{ij} = X_{ij} if (i, j) ∈ Ω, and 0 otherwise.
Step S505: the mean value of each row of the matrix X is taken as the motion f(v_i) of the vertex corresponding to that row, thereby obtaining the vertex position at the next time instant, v′_i = v_i + f(v_i).
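The nuclear-norm recovery problem of step S504 can be sketched with iterative singular value thresholding (soft-impute style). This is an illustrative sketch, not the patent's solver: the threshold `tau`, the iteration count, and the fixed-point scheme (shrink the singular values of the current estimate, then re-impose the observed entries) are assumptions chosen so the code converges on small examples.

```python
import numpy as np

def complete_low_rank(M, mask, tau=0.05, n_iter=2000):
    """Approximate  minimize ||X||_*  s.t.  P_Omega(X) = P_Omega(M)
    by iterative singular value shrinkage with re-imposition of the
    observed entries (mask is True where M is observed)."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt  # singular value shrinkage
        X = np.where(mask, M, X)                 # keep observed entries exact
    return X
```

In the context of step S505, the rows of the recovered X would then be averaged to obtain one motion vector per surface vertex.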
Step S403: reliable vertices are selected from the obtained vertices according to the predetermined spatio-temporal constraint conditions and used as the position constraints of the volumetric deformation. In an embodiment of the present invention, the predetermined spatio-temporal constraint conditions comprise:
C_sp = (1/N) Σ_{n=0}^{N−1} (1 − P_sil^n(v′_i)),
C_tmp = (1/N_v) Σ_{n∈V(i)} (1 − P_z^n(p(v_i), p(v′_i))),
C_smth = ‖f(v_i) − (1/N_s) Σ_{j∈N} f(v_j)‖,
where P_sil^n(v′_i) is the silhouette error of the estimate, whose value is 1 if v′_i projects to a pixel within the silhouette in camera n's image at the next time instant and 0 otherwise; V(i) is the set of cameras from which v_i is visible; N_v is the number of visible cameras; P_z^n(p(v_i), p(v′_i)) computes the ZNCC correlation between the projected positions of v_i and v′_i in camera n's image; and N_s is the number of immediate neighbors of vertex v_i.
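Given per-vertex scores C_sp, C_tmp, and C_smth, the selection in step S403 might be sketched as a simple thresholding rule. The patent does not specify how the three scores are combined, so the AND-of-thresholds rule and the threshold values below are purely illustrative assumptions.

```python
import numpy as np

def select_reliable(c_sp, c_tmp, c_smth, t_sp=0.1, t_tmp=0.3, t_smth=0.05):
    """Keep a vertex as a deformation position constraint only if all
    three spatio-temporal scores are below their thresholds (for each
    of C_sp, C_tmp, C_smth, lower means more consistent)."""
    c_sp, c_tmp, c_smth = map(np.asarray, (c_sp, c_tmp, c_smth))
    return (c_sp < t_sp) & (c_tmp < t_tmp) & (c_smth < t_smth)
```

The returned boolean mask would then index the vertices passed as position constraints to the deformation step S404.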
Step S404: the Laplacian volumetric deformation framework is driven to update the dynamic three-dimensional model according to the position constraints. Specifically, this comprises:
Step S601: the following Laplacian volumetric deformation linear system is set up; for each v′_i,
Σ_{j∈N(i)} ω_ij (v′_i − v′_j) = Σ_{j∈N(i)} (ω_ij/2)(R_i + R_j)(v_i − v_j),
where R_i and R_j are rotation matrices, initialized to the identity matrix.
Step S602: the covariance matrix is defined as
C_i = Σ_{j∈N(i)} ω_ij (v_i − v_j)(v′_i − v′_j)ᵀ.
Performing a singular value decomposition C_i = U_i D_i V_iᵀ gives the rotation
R_i = V_i U_iᵀ.
If det(R_i) ≤ 0, the sign of the column of U_i corresponding to the smallest singular value is changed.
Step S603: if the silhouette error is less than a given threshold, the model is updated; otherwise, return to step S601.
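The per-vertex rotation fit of step S602 can be sketched as follows. This is an illustrative numpy sketch of the covariance-plus-SVD step described above; the function signature and the edge-list representation are assumptions, not the patent's code.

```python
import numpy as np

def fit_rotation(edges_rest, edges_def, weights):
    """Best rotation R_i for one vertex: build the covariance
    C_i = sum_j w_ij (v_i - v_j)(v'_i - v'_j)^T, take its SVD
    C_i = U D V^T, and set R_i = V U^T; if det(R_i) <= 0, flip the
    column of U for the smallest singular value to get a proper rotation."""
    C = np.zeros((3, 3))
    for e, e2, w in zip(edges_rest, edges_def, weights):
        C += w * np.outer(e, e2)
    U, D, Vt = np.linalg.svd(C)
    R = Vt.T @ U.T
    if np.linalg.det(R) <= 0:
        U[:, np.argmin(D)] *= -1.0  # numpy sorts D descending, so this is the last column
        R = Vt.T @ U.T
    return R
```

Steps S601 and S602 alternate: solving the linear system with the current rotations updates the positions v′, and refitting the rotations from the new positions updates R_i, until the silhouette error of step S603 falls below the threshold.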
As a preferred embodiment of the present invention, the above methods for capturing and reconstructing the static three-dimensional model and the dynamic three-dimensional model can be used together; Fig. 4 is a schematic block diagram of the overall dynamic three-dimensional reconstruction method according to an embodiment of the present invention.
Fig. 5 shows the dynamic three-dimensional model results obtained by applying the proposed method to two long time sequences. The first image of each sequence result is an overview in which the models of all time instants are placed together; the subsequent images are the modeling results at the individual time instants.
An embodiment of the present invention also proposes a system for capturing and reconstructing a static three-dimensional model, the system comprising a plurality of cameras arranged around a ring-shaped capture area and a static three-dimensional model reconstruction device. The plurality of cameras are configured to perform image acquisition of a moving object in the area. The static three-dimensional model reconstruction device is configured to obtain the visual hull model, obtain the depth point cloud of each viewpoint according to the image of each viewpoint, the visual hull model, and the preset constraint conditions, and fuse the obtained depth point clouds of all viewpoints to obtain the static three-dimensional model. For the specific working process of the static three-dimensional model reconstruction device, reference may be made to the above embodiment of the method for capturing and reconstructing a static three-dimensional model, which is not repeated here.
In addition, an embodiment of the present invention also proposes a system for capturing and reconstructing a dynamic three-dimensional model, comprising a plurality of cameras arranged around a ring-shaped capture area, a static three-dimensional model obtaining device, and a dynamic three-dimensional model reconstruction device. The plurality of cameras are configured to perform image acquisition of a moving object in the area. The static three-dimensional model obtaining device is configured to obtain the static three-dimensional model. The dynamic three-dimensional model reconstruction device is configured to convert the surface model of the static three-dimensional model into a volume model serving as the default scene representation for motion tracking, obtain the initial three-dimensional motion of the model vertices at the next time instant, select reliable vertices from the obtained vertices according to the predetermined spatio-temporal constraint conditions as the position constraints of the volumetric deformation, and drive the Laplacian volumetric deformation framework to update the dynamic three-dimensional model according to the position constraints. For the specific working processes of the static and dynamic three-dimensional model reconstruction devices, reference may be made to the above embodiments of the methods for capturing and reconstructing static and dynamic three-dimensional models, which are not repeated here.
The present invention guarantees the accuracy and completeness of the shape reconstructed for the static three-dimensional model. In addition, the present invention designs a new three-dimensional motion estimation method based on sparse representation theory, together with a deformation optimization framework based on the volume model, and can therefore obtain high-quality dynamic reconstruction results. Moreover, the present invention does not rely on a 3D scanner or optical markers, so its cost is low, and it can track the motion of a person wearing casual clothing.
Although embodiments of the present invention have been illustrated and described, those of ordinary skill in the art will appreciate that various changes, modifications, substitutions, and variations can be made to these embodiments without departing from the principles and spirit of the present invention; the scope of the present invention is defined by the claims and their equivalents.

Claims (15)

1. A method for capturing and reconstructing a dynamic three-dimensional model, characterized by comprising the following steps:
obtaining a static three-dimensional model;
converting the surface model of the static three-dimensional model into a volume model, which serves as the default scene representation for motion tracking;
obtaining the initial three-dimensional motion of the model vertices at the next time instant;
selecting reliable vertices from the obtained vertices according to predetermined spatio-temporal constraint conditions as the position constraints of the volumetric deformation;
driving the Laplacian volumetric deformation framework to update the dynamic three-dimensional model according to the position constraints.
2. The method for capturing and reconstructing a dynamic three-dimensional model according to claim 1, characterized in that the static three-dimensional model is measured and stored in advance, or the static three-dimensional model is obtained on site,
wherein obtaining the static three-dimensional model on site comprises:
performing image acquisition of the moving object in the ring-shaped capture area;
obtaining the visual hull model;
obtaining the depth point cloud of each viewpoint according to the image of each viewpoint, the visual hull model, and preset constraint conditions;
fusing the obtained depth point clouds of all viewpoints to obtain the static three-dimensional model.
3. The method for capturing and reconstructing a dynamic three-dimensional model according to claim 2, characterized in that obtaining the depth point cloud of each viewpoint according to the visual hull model and the preset constraint conditions comprises:
intersecting each viewpoint image with the obtained visual hull model to obtain the visible point cloud of each viewpoint;
projecting the visible point cloud of each viewpoint onto that viewpoint's image to obtain an initial depth point cloud estimate;
obtaining an accurate depth point cloud according to the initial depth point cloud estimate and the preset constraint conditions.
4. The method for capturing and reconstructing a dynamic three-dimensional model according to claim 3, characterized in that the preset constraint conditions comprise one or more of an epipolar geometry constraint, a brightness constraint, a gradient constraint, and a smoothness constraint.
5. The method for capturing and reconstructing a dynamic three-dimensional model according to claim 4, characterized in that the accurate depth point cloud is obtained by minimizing the following energy:
E(a, b) = ∫_Ω β(x) Ψ(|I(x_b + d) − I(x)|² + γ|∇I(x_b + d) − ∇I(x)|²) dx + α ∫_Ω Ψ(|∇a|² + |∇b|²) dx,
where d = (a, b, 1) is the initial depth point cloud estimate; x := (x, y, c) is a pixel position (x, y) in the image of reference viewpoint c, and I(x) is the brightness at that pixel position; x_b := (x_b, y_b, c+1) is the corresponding point on the epipolar line in viewpoint c+1, and w is the offset of the point corresponding to x in viewpoint c+1; ∇ is the spatial gradient operator; β(x) is an occlusion map; and Ψ is a robust penalty function.
6. The method for capturing and reconstructing a dynamic three-dimensional model according to claim 2, characterized in that fusing the obtained depth point clouds of all viewpoints to obtain the static three-dimensional model comprises:
fusing the depth point clouds of all viewpoints and removing outliers using the silhouette constraint;
reconstructing the full surface model by the marching cubes method to obtain the static three-dimensional model.
7. The method for capturing and reconstructing a dynamic three-dimensional model according to claim 1, characterized in that obtaining the initial three-dimensional motion of the model vertices at the next time instant comprises:
computing the optical flow of each viewpoint image at the next time instant;
solving the scene flow of each visible point from the optical flow of its viewpoint and those of the adjacent viewpoints, and assigning a relatively large value to the scene flow of invisible points;
taking the scene flow solved for each viewpoint as a column, constructing a matrix M ∈ ℝ^{m×n}, where m is the number of surface vertices;
obtaining a matrix X based on sparse representation theory;
taking the mean value of each row of the matrix X as the motion f(v_i) of the vertex corresponding to that row, thereby obtaining the vertex position at the next time instant, v′_i = v_i + f(v_i).
8. The method for capturing and reconstructing a dynamic three-dimensional model according to claim 7, characterized in that obtaining the matrix X based on sparse representation theory comprises:
obtaining the new matrix X by solving the following low-rank matrix recovery problem:
minimize ‖X‖_*
subject to P_Ω(X) = P_Ω(M),
where X is the unknown variable, Ω is a subset of the complete element set [m] × [n] ([n] is defined as the sequence {1, ..., n}), and P_Ω is the sampling operator, defined as
[P_Ω(X)]_{ij} = X_{ij} if (i, j) ∈ Ω, and 0 otherwise.
9. The method for capturing and reconstructing a dynamic three-dimensional model according to claim 1, characterized in that the predetermined spatio-temporal constraint conditions comprise:
C_sp = (1/N) Σ_{n=0}^{N−1} (1 − P_sil^n(v′_i)),
C_tmp = (1/N_v) Σ_{n∈V(i)} (1 − P_z^n(p(v_i), p(v′_i))),
C_smth = ‖f(v_i) − (1/N_s) Σ_{j∈N} f(v_j)‖,
where P_sil^n(v′_i) is the silhouette error of the estimate, whose value is 1 if v′_i projects to a pixel within the silhouette in camera n's image at the next time instant and 0 otherwise; V(i) is the set of cameras from which v_i is visible; N_v is the number of visible cameras; P_z^n(p(v_i), p(v′_i)) computes the ZNCC correlation between the projected positions of v_i and v′_i in camera n's image; and N_s is the number of immediate neighbors of vertex v_i.
10. The method for capturing and reconstructing a dynamic three-dimensional model according to claim 1, characterized in that driving the Laplacian volumetric deformation framework to update the dynamic three-dimensional model according to the position constraints comprises:
initializing the rotation matrices to the identity matrix, R_i = R_j = I;
performing the Laplacian volumetric deformation optimization;
obtaining new rotation matrices R_i and R_j;
judging whether the silhouette error is less than a predetermined value; if it is less than the predetermined value, updating the dynamic three-dimensional model; if it is not less than the predetermined value, continuing the Laplacian volumetric deformation optimization.
11. A system for capturing and reconstructing a dynamic three-dimensional model, characterized by comprising:
a plurality of cameras arranged around a ring-shaped capture area, configured to perform image acquisition of a moving object in the area;
a static three-dimensional model obtaining device, configured to obtain a static three-dimensional model;
a dynamic three-dimensional model reconstruction device, configured to convert the surface model of the static three-dimensional model into a volume model serving as the default scene representation for motion tracking, obtain the initial three-dimensional motion of the model vertices at the next time instant, select reliable vertices from the obtained vertices according to predetermined spatio-temporal constraint conditions as the position constraints of the volumetric deformation, and drive the Laplacian volumetric deformation framework to update the dynamic three-dimensional model according to the position constraints.
12. The system for capturing and reconstructing a dynamic 3D model according to claim 11, wherein the static 3D model acquisition device obtains the static 3D model by:
acquiring images of the moving object within the ring-shaped field;
obtaining a visual-hull model;
obtaining a depth point cloud for each view according to the image of each view, the visual-hull model, and predetermined constraint conditions; and
fusing the obtained depth point clouds of all views to obtain the static 3D model.
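The four steps of claim 12 form a simple pipeline: images in, visual hull, per-view depth clouds, fused static model. A minimal sketch of that data flow, with the three processing stages passed in as callables (hypothetical names; the patent does not name its components this way):

```python
def reconstruct_static_model(images, compute_visual_hull, estimate_depth, fuse):
    """Static-model pipeline of claim 12 (illustrative data flow only).

    compute_visual_hull: images -> visual-hull model
    estimate_depth:      (image, view index, hull) -> per-view depth point cloud
    fuse:                list of depth point clouds -> static 3D model
    """
    hull = compute_visual_hull(images)              # silhouette-based visual hull
    clouds = [estimate_depth(img, view, hull)       # one depth cloud per view,
              for view, img in enumerate(images)]   # constrained by the hull
    return fuse(clouds)                             # merged static 3D model
```

The point of the structure is that the visual hull bounds every per-view depth estimate before fusion, matching the order of the claimed steps.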
13. The system for capturing and reconstructing a dynamic 3D model according to claim 12, wherein the predetermined constraint conditions comprise one or more of an epipolar geometry constraint, a brightness constraint, a gradient constraint, and a smoothness constraint.
14. The system for capturing and reconstructing a dynamic 3D model according to claim 13, wherein an accurate depth point cloud is obtained by the following formula:
E(a, b) = ∫_Ω β(x) Ψ(|I(x_b + d) − I(x)|² + γ|∇I(x_b + d) − ∇I(x)|²) dx + α ∫_Ω Ψ(|∇a|² + |∇b|²) dx,
where d = (a, b, 1) is the initial depth point cloud estimate; x := (x, y, c) is a pixel position (x, y) in the image of reference view c, and I(x) is the brightness at that pixel; x_b := (x_b, y_b, c) is the epipolar point on view c + 1, and d is the offset of the point on view c + 1 corresponding to x; ∇ is the spatial gradient operator; β(x) is an occlusion map; and Ψ is a robust penalty function.
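The claim-14 energy combines an occlusion-weighted data term (brightness and gradient constancy between the reference view and the warped next view) with a regularizer on the offset fields a and b. A discrete sketch is below; the warped image and gradient difference are assumed precomputed, since the warp x_b + d depends on camera geometry the claim does not detail, and the concrete robust penalty Ψ(s²) = √(s² + ε²) is a common choice rather than the patent's stated form:

```python
import numpy as np

EPS = 1e-3

def psi(s2):
    """Robust penalty Ψ(s²) = sqrt(s² + ε²) (an assumed, common choice)."""
    return np.sqrt(s2 + EPS**2)

def energy(I_ref, I_warp, grad_diff2, a, b, occlusion, gamma=0.5, alpha=1.0):
    """Discrete version of the claim-14 energy E(a, b).

    I_warp is I sampled at x_b + d; grad_diff2 is |∇I(x_b+d) − ∇I(x)|²;
    occlusion is the map β(x). All arrays share one 2-D grid shape.
    """
    # Data term: robust brightness + gradient constancy, masked by β(x).
    data = occlusion * psi((I_warp - I_ref)**2 + gamma * grad_diff2)
    # Smoothness term: robust penalty on the gradients of the offset fields.
    ga = np.gradient(a)
    gb = np.gradient(b)
    smooth = psi(ga[0]**2 + ga[1]**2 + gb[0]**2 + gb[1]**2)
    return data.sum() + alpha * smooth.sum()
```

In practice such an energy is minimized by linearizing Ψ and solving the resulting Euler-Lagrange equations; the sketch only evaluates it.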
15. The system for capturing and reconstructing a dynamic 3D model according to claim 11, wherein the predetermined space-time constraint conditions comprise:
C_sp = (1/N) Σ_{n=0}^{N−1} (1 − P_sil^n(v'_i)),
C_tmp = (1/N_v) Σ_{n∈V(i)} (1 − P_z^n(p(v_i), p(v'_i))),
C_smth = ‖f(v_i) − (1/N_s) Σ_{j∈N} f(v_j)‖,
where P_sil^n(v'_i) is the silhouette-error indicator of the estimate: its value is 1 if the pixel onto which v'_i projects in the image of camera n at the next time instant lies within the silhouette, and 0 otherwise; V(i) is the set of cameras from which v_i is visible; N_v is the number of visible cameras; P_z^n(p(v_i), p(v'_i)) computes the ZNCC correlation between the projected positions of v_i and v'_i in the image of camera n; and N_s is the number of immediate neighbors of vertex v_i.
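Given the per-camera indicator and correlation values, the three claim-15 terms reduce to simple averages and a norm. A small sketch, assuming the silhouette indicators, ZNCC scores, and 3D motion (scene-flow) vectors have already been computed upstream:

```python
import numpy as np

def space_time_costs(sil_hits, zncc, flow_i, flow_nbrs):
    """Claim-15 constraint terms (illustrative).

    sil_hits:  length-N 0/1 array of P_sil^n(v'_i), one entry per camera n
    zncc:      P_z^n values over the N_v cameras in V(i) that see v_i
    flow_i:    3-vector f(v_i); flow_nbrs: (N_s, 3) array of neighbor flows
    """
    c_sp = np.mean(1.0 - sil_hits)    # silhouette consistency of v'_i
    c_tmp = np.mean(1.0 - zncc)       # temporal photo-consistency (ZNCC)
    c_smth = np.linalg.norm(flow_i - flow_nbrs.mean(axis=0))  # flow smoothness
    return c_sp, c_tmp, c_smth
```

Low values of all three terms mark a vertex whose estimated motion is reliable, which is how the claims select "accurate" vertices to serve as position constraints.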
CN 201110167593 2010-04-06 2010-04-06 Method and system for capturing and reconstructing 3D model Pending CN102222361A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110167593 CN102222361A (en) 2010-04-06 2010-04-06 Method and system for capturing and reconstructing 3D model

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN2010101411826A Division CN101833786B (en) 2010-04-06 2010-04-06 Method and system for capturing and rebuilding three-dimensional model

Publications (1)

Publication Number Publication Date
CN102222361A true CN102222361A (en) 2011-10-19

Family

ID=44778905

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110167593 Pending CN102222361A (en) 2010-04-06 2010-04-06 Method and system for capturing and reconstructing 3D model

Country Status (1)

Country Link
CN (1) CN102222361A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101000499A (en) * 2006-12-18 2007-07-18 浙江大学 Contour machining method and system based on multi-sensor integral measuring
JP2007289704A (en) * 2006-04-21 2007-11-08 Siemens Medical Solutions Usa Inc System and method for semi-automatic aortic aneurysm analysis

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Mouragnon, E., "Real Time Localization and 3D Reconstruction", IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2006, Vol. 1, pp. 363-370, 2006-07-05, relevant to claims 1-15 *
Wang Yi et al., "3D Model Reconstruction from 2D Architectural Structure Drawings", Journal of Engineering Graphics (工程图学学报), No. 2, pp. 79-83, 2006-12-31, relevant to claims 1-15 *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102651072A (en) * 2012-04-06 2012-08-29 浙江大学 Classification method for three-dimensional human motion data
CN104599314A (en) * 2014-06-12 2015-05-06 深圳奥比中光科技有限公司 Three-dimensional model reconstruction method and system
CN104851129B (en) * 2015-05-21 2018-01-23 成都绿野起点科技有限公司 A kind of 3D method for reconstructing based on multiple views
CN104851129A (en) * 2015-05-21 2015-08-19 成都绿野起点科技有限公司 Multi-view-based 3D reconstruction method
CN104915978A (en) * 2015-06-18 2015-09-16 天津大学 Realistic animation generation method based on Kinect
CN105654472A (en) * 2015-12-25 2016-06-08 陕西师范大学 Projective reconstruction method based on trajectory basis
CN105654472B (en) * 2015-12-25 2018-10-23 陕西师范大学 A kind of projective reconstruction method based on track base
CN106097328A (en) * 2016-06-07 2016-11-09 陕西师范大学 A kind of image missing values restoration methods based on non-rigid track base
CN106097328B (en) * 2016-06-07 2019-05-14 陕西师范大学 A kind of image missing values restoration methods based on non-rigid track base
CN108140252A (en) * 2016-09-08 2018-06-08 深圳市大富网络技术有限公司 A kind of generation method and relevant device of square animation
WO2018045532A1 (en) * 2016-09-08 2018-03-15 深圳市大富网络技术有限公司 Method for generating square animation and related device
CN106683178A (en) * 2016-12-30 2017-05-17 天津大学 Method for recovering three-dimensional framework by low-rank matrix on basis of graph theory
CN106683178B (en) * 2016-12-30 2020-04-28 天津大学 Graph theory-based low-rank matrix three-dimensional framework recovery method
CN108881885A (en) * 2017-04-10 2018-11-23 钰立微电子股份有限公司 Advanced treatment system
CN109086492A (en) * 2018-07-11 2018-12-25 大连理工大学 A kind of wire frame representation of body structure threedimensional model and deformation method and system
CN109086492B (en) * 2018-07-11 2022-12-13 大连理工大学 Wire frame representation and deformation method and system for three-dimensional model of vehicle body structure
CN111199576A (en) * 2019-12-25 2020-05-26 中国人民解放军军事科学院国防科技创新研究院 Outdoor large-range human body posture reconstruction method based on mobile platform
CN111199576B (en) * 2019-12-25 2023-08-18 中国人民解放军军事科学院国防科技创新研究院 Outdoor large-range human body posture reconstruction method based on mobile platform
WO2022087932A1 (en) * 2020-10-29 2022-05-05 Huawei Technologies Co., Ltd. Non-rigid 3d object modeling using scene flow estimation
WO2022116459A1 (en) * 2020-12-04 2022-06-09 深圳市慧鲤科技有限公司 Three-dimensional model construction method and apparatus, and device, storage medium and computer program
CN112651357A (en) * 2020-12-30 2021-04-13 浙江商汤科技开发有限公司 Segmentation method of target object in image, three-dimensional reconstruction method and related device
WO2022142311A1 (en) * 2020-12-30 2022-07-07 浙江商汤科技开发有限公司 Method for segmenting target object in image, three-dimensional reconstruction method, and related apparatus
CN112651357B (en) * 2020-12-30 2024-05-24 浙江商汤科技开发有限公司 Method for segmenting target object in image, three-dimensional reconstruction method and related device

Similar Documents

Publication Publication Date Title
CN101833786B (en) Method and system for capturing and rebuilding three-dimensional model
CN102222361A (en) Method and system for capturing and reconstructing 3D model
Janai et al. Slow flow: Exploiting high-speed cameras for accurate and diverse optical flow reference data
Kerl et al. Dense continuous-time tracking and mapping with rolling shutter RGB-D cameras
Dou et al. Scanning and tracking dynamic objects with commodity depth cameras
CN109242873A (en) A method of 360 degree of real-time three-dimensionals are carried out to object based on consumer level color depth camera and are rebuild
Furukawa et al. Dense 3d motion capture for human faces
CN102231792B (en) Electronic image stabilization method based on characteristic coupling
Liu et al. Deep shutter unrolling network
US20060244757A1 (en) Methods and systems for image modification
Chen et al. Accurate and robust 3d facial capture using a single rgbd camera
KR20140108828A (en) Apparatus and method of camera tracking
CN108711185A (en) Joint rigid moves and the three-dimensional rebuilding method and device of non-rigid shape deformations
CN104915978A (en) Realistic animation generation method based on Kinect
CN103002309B (en) Depth recovery method for time-space consistency of dynamic scene videos shot by multi-view synchronous camera
KR101193223B1 (en) 3d motion tracking method of human's movement
Petit et al. Combining complementary edge, keypoint and color features in model-based tracking for highly dynamic scenes
Sizintsev et al. Spatiotemporal stereo and scene flow via stequel matching
Yuan et al. Temporal upsampling of depth maps using a hybrid camera
Hilsmann et al. Realistic cloth augmentation in single view video under occlusions
Fechteler et al. Real-time avatar animation with dynamic face texturing
Fang et al. Rototexture: Automated tools for texturing raw video
De Aguiar et al. Marker-less 3D feature tracking for mesh-based human motion capture
Suttasupa et al. Plane detection for Kinect image sequences
Qu et al. Fast rolling shutter correction in the wild

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20111019