CN105654472B - Projective reconstruction method based on trajectory bases - Google Patents

Projective reconstruction method based on trajectory bases

Info

Publication number
CN105654472B
Authority
CN
China
Prior art keywords
characteristic point
matrix
formula
value
image sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510991797.0A
Other languages
Chinese (zh)
Other versions
CN105654472A (en)
Inventor
刘侍刚
李丹丹
彭亚丽
裘国永
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shaanxi Normal University
Original Assignee
Shaanxi Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shaanxi Normal University
Priority to CN201510991797.0A
Publication of CN105654472A
Application granted
Publication of CN105654472B
Status: Expired - Fee Related
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 — Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10016 — Video; Image sequence


Abstract

The present invention relates to a projective reconstruction method based on trajectory bases. It assumes that a non-rigid body is composed of several trajectory bases. The feature point data of an image sequence are first extracted and the three-dimensional homogeneous trajectory coordinates of the feature points are established; the low-rank property of the image matrix is then exploited to perform a singular value decomposition, and the depth factors are solved simultaneously with row-vector and column-vector constraints, realizing projective reconstruction. The method of the present invention improves the computation speed and the robustness of the algorithm, ensures fast convergence and a good re-projection effect, and the computational method of the present invention is combined with the perspective model, so that the reconstruction effect is closer to reality and the error is smaller.

Description

Projective reconstruction method based on trajectory bases
Technical field
The invention belongs to the technical field of computer vision research, and in particular relates to a projective reconstruction method based on trajectory bases for non-rigid objects.
Background technology
Three-dimensional reconstruction from image sequences is a hot issue in computer vision research, and projective reconstruction is a necessary step of three-dimensional reconstruction whose precision directly affects the reconstruction result. Three-dimensional reconstruction is widely applied in fields such as biomedicine, game manufacturing and animation production, so studying it has important research significance and practical value. After two decades of development, research on rigid bodies has approached maturity and laid the foundation for the later study of non-rigid bodies. Most motion in daily life is flexible and therefore non-rigid, but the structure of a non-rigid body changes during motion and its motion is complicated. In 2000, the paper "Recovering non-rigid 3D shape from image streams" published by Bregler et al. at the International Conference on Computer Vision and Pattern Recognition (Bregler C, Hertzmann A, Biermann H. Recovering non-rigid 3D shape from image streams[J]. Conf on Computer Vision & Pattern Recognition, 2000, 2:690-696.) first proposed the assumption that a non-rigid body can be represented as a weighted combination of several rigid shape bases, which set the direction for non-rigid research. Many later scholars studied non-rigid bodies on the basis of Bregler's assumption. Among them, Torresani, in the paper "Tracking and modeling non-rigid objects with rank constraints" (Torresani L. et al. Tracking and modeling non-rigid objects with rank constraints[J]. CVPR, 2001, 1:493.), used a rank constraint to recover the structure and motion information of non-rigid bodies, but ignored the correlation of the object motion, so the robustness is poor. To solve this problem, Akhter et al. published "Trajectory Space: A Dual Representation for Nonrigid Structure from Motion" (Akhter I, Sheikh Y, Khan S, et al. Trajectory Space: A Dual Representation for Nonrigid Structure from Motion[J]. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2010, 33(7):1442-1456.), which proposed a three-dimensional reconstruction method based on trajectory bases. This method considers that all feature points of the images move in a low-dimensional subspace, i.e. the motion trajectory of each feature point can be represented as a combination of a series of trajectory basis vectors. However, Akhter's method is based on the orthographic projection model; when the depth of the object cannot be neglected compared with the camera-to-object distance, the error is large.
Invention content
In view of the poor reconstruction effect and slow convergence of previous projective reconstruction methods, the purpose of the present invention is to provide a projective reconstruction method that is fast, converges quickly and has small error, and that solves the depth factor values under a perspective projection model using both column-vector and row-vector constraints.
To achieve the above goal, the technical solution adopted by the present invention comprises the following steps:
(1) From every image of the image sequence, extract the feature point data $s_{i,j}$ ($i = 1,\dots,F$, $j = 1,\dots,P$) that reflect the motion trajectories, where F and P are respectively the number of images and the number of feature points;
(2) Solve the depth factors of the images from the feature point data and complete the projective reconstruction, specifically:
(2.1) Assume the camera model is a perspective projection model, initialize the depth factors to 1, and establish the three-dimensional homogeneous trajectory coordinates of the feature points;
(2.2) Establish the image sequence matrix from the three-dimensional homogeneous trajectory coordinates of the feature points and perform SVD on it;
(2.3) From the orthogonal matrices obtained by the decomposition, solve the projection matrices corresponding to the first 3r+1 rows and the first 3r+1 columns of the image sequence, r being the number of basis elements;
(2.4) Compute the depth factor values using the projection matrix corresponding to the first 3r+1 columns obtained above;
(2.5) Substitute the depth factors solved in step (2.4) into step (2.1), repeat the operations of step (2.1) and step (2.3) to solve the projection matrix of the first 3r+1 rows of the image sequence, then perform the iterative analysis on every row and solve the depth factors of the image sequence again;
(2.6) Substitute the depth factor values solved in step (2.5) into step (2.1) and repeat steps (2.2)–(2.5) until the absolute difference between the value of $v_{3r+2}$ obtained by the analysis and the result of the previous iteration is less than or equal to ξ, $10^{-4} \ge \xi > 0$; the iteration then terminates, k being the number of iterations, and the values of $U^{(k)}$, $D^{(k)}$ and $V^{(k)}$ are obtained;
(2.7) Using the values of $U^{(k)}$, $V^{(k)}$ and $D^{(k)}$ obtained in step (2.6), obtain the projective reconstruction result $M = U^{(k)} V^{(k)} D^{(k)}$ and complete the projective reconstruction of the image sequence.
In step (2.1) above, the j-th feature point of the i-th image can be represented by the formula
$s_{i,j} = P_i S_{i,j}$ (1)
where $s_{i,j} = [x_{i,j}, y_{i,j}, 1]^T$ are the homogeneous coordinates of the feature point extracted in step (1), $P_i$ is a 3×4 projection matrix, $S_{i,j} = [X_{i,j}, Y_{i,j}, Z_{i,j}, 1]^T$ are the three-dimensional homogeneous coordinates of the j-th feature point of the i-th image, $X_{i,j}$, $Y_{i,j}$ and $Z_{i,j}$ are respectively the i-th-row values of the three-dimensional trajectories $x_j$, $y_j$ and $z_j$ of the j-th feature point, $a_{xn,j}$, $a_{yn,j}$, $a_{zn,j}$ are coefficients, $\sigma_n = (\sigma_{1,n}\ \cdots\ \sigma_{F,n})^T$ is a trajectory basis vector, $i = 1,\dots,F$, F being the number of images extracted in step (1), $j = 1,\dots,P$, P being the total number of feature points of all images in step (1), and r is the number of basis elements.
From formula (1), the three-dimensional homogeneous trajectory coordinate expression of all the feature points can be obtained as formula (2):
where $A_x$, $A_y$ and $A_z$ are expressed respectively as follows:
Step (2.2) is specifically: express the image sequence by the matrix $M'_{3F\times P}$, so that $M'_{3F\times P} = P_{3F\times 4F} S_{4F\times P}$; from formula (2) it is known that the rank of $M'_{3F\times P}$ is 3r+1, so SVD can be performed on $M'_{3F\times P}$, giving
$M'_{3F\times P} = U'_{3F\times 3F}\, V'_{3F\times P}\, D'_{P\times P}$ (3)
where $U'_{3F\times 3F}$ and $D'_{P\times P}$ are orthogonal matrices, $V'_{3F\times P} = \mathrm{diag}(v_1\ v_2\ \cdots\ v_t)$ with $t = \min(3F, P)$ is a diagonal matrix, U is the first 3r+1 columns of $U'_{3F\times 3F}$, V is the upper-left $(3r+1)\times(3r+1)$ submatrix of $V'_{3F\times P}$, D is the first 3r+1 rows of $D'_{P\times P}$, and $\lambda_{i,j}$ is the depth factor, $i = 1,\dots,F$, $j = 1,\dots,P$.
Step (2.3) is specifically: from formula (3), find the projection matrix $T_{3F\times 3F}$ of U and the projection matrix $T'_{P\times P}$ of D for all images in the image sequence, where $T_{3F\times 3F} = I - U(U^{T}U)^{-1}U^{T}$, $T'_{P\times P} = I - D^{T}(DD^{T})^{-1}D$, and I is the identity matrix.
Step (2.4) is specifically: using the projection matrix $T_{3F\times 3F}$ obtained in step (2.3), the values of the depth factors $\lambda_{1,j},\dots,\lambda_{F,j}$ are calculated by formula (4),
where $j = 1,\dots,P$.
The calculation formula of the iterative analysis involved in step (2.5) above is:
$[\lambda'_{i,1}s_{i,1}\ \cdots\ \lambda'_{i,P}s_{i,P}]\,T'_{P\times P} = 0$ (5)
where $i = 1,\dots,F$.
The trajectory-basis-based projective reconstruction method of the present invention takes advantage of the fact that trajectory bases can be pre-defined. It assumes that the non-rigid body is composed of several trajectory bases, exploits the low-rank property of the image matrix to perform a singular value decomposition, uses a perspective camera model, and solves the depth factors simultaneously with the row-vector and column-vector constraints, finally realizing projective reconstruction. Compared with the prior art, the advantageous effects of this method are:
1) The present invention utilizes the characteristics of trajectory bases and pre-defines a wavelet trajectory basis, which reduces the number of unknowns, improves the computation speed and the robustness of the algorithm, and ensures fast convergence and a good re-projection effect.
2) On the basis of the column-vector analysis, the present invention further performs a row-vector analysis to solve the depth factors, which ensures a smaller error and a more accurate result.
3) In the reconstruction process, the camera is assumed to be a perspective model, which better matches the actual situation, and the computational method of the present invention is combined with this perspective model, so that the reconstruction effect is closer to reality and the error is smaller.
Description of the drawings
Fig. 1 is the flow chart of the projective reconstruction based on trajectory bases in embodiment 1.
Fig. 2 is the back view of the dancer used in embodiment 1.
Fig. 3 is the result of marking the feature points on the dancer's back in embodiment 1.
Fig. 4 is the projective reconstruction result of the dancer's back obtained in embodiment 1.
Fig. 5 is the side view of the dancer used in embodiment 1.
Fig. 6 is the result of marking the feature points on the dancer's side in embodiment 1.
Fig. 7 is the projective reconstruction result of the dancer's side obtained in embodiment 1.
Fig. 8 is the variation of $v_{3r+2}$ with image noise obtained in embodiment 2.
Fig. 9 is the variation of $v_{3r+2}$ with the number of iterations obtained in embodiment 2.
Specific implementation mode
The present invention is described below with reference to the drawings and embodiments, but the present invention is not limited to the following embodiments.
Embodiment 1
The dance video sequence from the Cameron University laboratory is used here; it is converted into an image sequence for the experiment. In order to reflect the motion, the feature points are selected at the arm and lower-leg positions that can reflect the motion conditions.
Projective reconstruction is carried out according to the method of the invention; the flow chart is shown in Fig. 1, and the specific steps are as follows:
(1) From every image of the image sequence (e.g. Figs. 2 and 5), extract the feature point data $s_{i,j}$ that reflect the motion trajectories, as shown in Figs. 3 and 6, where F and P are respectively the number of images and the number of feature points;
(2) Solve the depth factors of the images from the feature point data and complete the projective reconstruction; the specific implementation steps are:
(2.1) Assume the camera model is a perspective projection model and initialize the depth factors to 1; the j-th feature point of the i-th image is expressed as:
$s_{i,j} = P_i S_{i,j}$ (1)
where $s_{i,j} = [x_{i,j}, y_{i,j}, 1]^T$ are the homogeneous coordinates of the feature point extracted in step (1), $P_i$ is a 3×4 projection matrix, $S_{i,j} = [X_{i,j}, Y_{i,j}, Z_{i,j}, 1]^T$ are the three-dimensional homogeneous coordinates of the j-th feature point of the i-th image, $X_{i,j}$, $Y_{i,j}$ and $Z_{i,j}$ are respectively the i-th-row values of the three-dimensional trajectories $x_j$, $y_j$ and $z_j$ of the j-th feature point, $a_{xn,j}$, $a_{yn,j}$, $a_{zn,j}$ are coefficients, $\sigma_n = (\sigma_{1,n}\ \cdots\ \sigma_{F,n})^T$ is a trajectory basis vector, $i = 1,\dots,F$, F being the number of images extracted in step (1), $j = 1,\dots,P$, P being the total number of feature points of all images in step (1), and r is the number of basis elements.
The three-dimensional homogeneous trajectory coordinate expression of all the feature points obtained from formula (1) is formula (2):
where $A_x$, $A_y$ and $A_z$ can be expressed respectively as follows:
In this embodiment F and P are respectively 250 images and 48 feature points, the number of basis elements is r = 9, the trajectory basis vectors $\sigma_n$ are expressed with the discrete cosine transform, and $a_{xn,j}$, $a_{yn,j}$ and $a_{zn,j}$ are randomly generated values;
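For illustration only, the following is a minimal Python/numpy sketch of how such a trajectory basis and random coefficients could be set up. The function name, the orthonormalisation step and the exact DCT normalisation are assumptions made for the sketch; the embodiment only states that $\sigma_n$ is expressed with the discrete cosine transform and that the coefficients are random.

```python
import numpy as np

def dct_trajectory_basis(F, r):
    """Build an F x r trajectory basis from the first r DCT vectors.

    Sketch of one common choice (DCT-II-like columns, then orthonormalised);
    the exact normalisation is an assumption, not part of the patent text.
    """
    i = np.arange(F)
    basis = np.zeros((F, r))
    for n in range(r):
        basis[:, n] = np.cos(np.pi * (2 * i + 1) * n / (2 * F))
    # Orthonormalise the columns so that basis.T @ basis is close to I_r.
    basis, _ = np.linalg.qr(basis)
    return basis

# Example with the values of this embodiment: F = 250 frames, P = 48 points, r = 9.
F, P, r = 250, 48, 9
theta = dct_trajectory_basis(F, r)   # columns play the role of sigma_1 ... sigma_r
a_x = np.random.randn(r, P)          # random coefficients a_{xn,j}
x_traj = theta @ a_x                 # F x P matrix whose j-th column is the x-trajectory x_j
```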
(2.2) Establish the image sequence from the three-dimensional homogeneous trajectory coordinates of the feature points and express it by the image sequence matrix $M'_{3F\times P}$, with $M'_{3F\times P} = P_{3F\times 4F} S_{4F\times P}$; from formula (2) it is known that the rank of $M'_{3F\times P}$ is 3r+1, so SVD can be performed on $M'_{3F\times P}$, giving
$M'_{3F\times P} = U'_{3F\times 3F}\, V'_{3F\times P}\, D'_{P\times P}$ (3)
where $U'_{3F\times 3F}$ and $D'_{P\times P}$ are orthogonal matrices, $V'_{3F\times P} = \mathrm{diag}(v_1\ v_2\ \cdots\ v_t)$ with $t = \min(3F, P)$ is a diagonal matrix, U is the first 3r+1 columns of $U'_{3F\times 3F}$, V is the upper-left $(3r+1)\times(3r+1)$ submatrix of $V'_{3F\times P}$, D is the first 3r+1 rows of $D'_{P\times P}$, and $\lambda_{i,j}$ is the depth factor, $i = 1,\dots,F$, $j = 1,\dots,P$.
(2.3) Determine U and D of the image sequence from formula (3), and solve the projection matrices $T_{3F\times 3F}$ and $T'_{P\times P}$ corresponding to the first 3r+1 columns and the first 3r+1 rows of the image sequence, where $T_{3F\times 3F} = I - U(U^{T}U)^{-1}U^{T}$, $T'_{P\times P} = I - D^{T}(DD^{T})^{-1}D$, and I is the identity matrix;
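The rank-(3r+1) truncation of steps (2.2)–(2.3) and the two projection matrices can be written compactly with numpy, as in the illustrative sketch below. The function name and the use of `numpy.linalg.svd` (whose U, singular values and right factor correspond to $U'$, $V'$ and $D'$ in formula (3)) are assumptions for the sketch, not part of the patent text.

```python
import numpy as np

def truncated_svd_and_projectors(M, r):
    """Rank-(3r+1) SVD of the 3F x P matrix M' and the projection matrices
    T_col = I - U (U^T U)^{-1} U^T and T_row = I - D^T (D D^T)^{-1} D.
    Minimal numpy sketch; variable names follow the patent's U, V, D.
    """
    k = 3 * r + 1
    U_full, s, D_full = np.linalg.svd(M, full_matrices=False)
    U = U_full[:, :k]                  # first 3r+1 columns of U'
    V = np.diag(s[:k])                 # upper-left (3r+1) x (3r+1) block of V'
    D = D_full[:k, :]                  # first 3r+1 rows of D'
    # Projectors onto the complements of the column space of U and row space of D.
    T_col = np.eye(M.shape[0]) - U @ np.linalg.inv(U.T @ U) @ U.T
    T_row = np.eye(M.shape[1]) - D.T @ np.linalg.inv(D @ D.T) @ D
    return U, V, D, T_col, T_row
```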
(2.4) Using the projection matrix $T_{3F\times 3F}$ obtained in step (2.3), the depth factors $\lambda_{1,j},\dots,\lambda_{F,j}$ are calculated by the following formula (4), where $j = 1,\dots,P$;
Suppose that $T_{3F\times 3F}$ is expressed as
Expanding formula (4), we have
Rearranging the above formula:
Note that in the formula $l = 1, 2, \dots, 3F$; for the j-th column of $M'_{3F\times P}$ there are only F unknowns (i.e. $\lambda_{1,j},\dots,\lambda_{F,j}$) and 3F equations can be listed, so $\lambda_{1,j},\dots,\lambda_{F,j}$ can be solved linearly; in this embodiment F is 250 images;
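One possible way to carry out this linear solve for a single feature point j is sketched below. Since formula (4) is not reproduced here, the sketch assumes it is the column-wise counterpart of formula (5), i.e. $T_{3F\times 3F}\,[\lambda_{1,j}s_{1,j}^{T}\ \cdots\ \lambda_{F,j}s_{F,j}^{T}]^{T}=0$; the null-space solve via SVD and the scale normalisation (mean depth 1) are added assumptions for illustration.

```python
import numpy as np

def depth_factors_per_point(s_homog, T_col):
    """Solve T_col @ [l_1j*s_1j; ...; l_Fj*s_Fj] = 0 for the depth factors
    of one feature point j (assumed form of formula (4)).

    s_homog : (F, 3) array of the homogeneous coordinates s_ij = [x_ij, y_ij, 1]
              of point j in every frame.
    T_col   : (3F, 3F) projection matrix from the truncated SVD step.
    """
    F = s_homog.shape[0]
    B = np.zeros((T_col.shape[0], F))
    for i in range(F):
        # Column i collects the coefficient of lambda_{i,j} in the linear system.
        B[:, i] = T_col[:, 3 * i:3 * i + 3] @ s_homog[i]
    # The (approximate) null space gives the homogeneous solution.
    _, _, vt = np.linalg.svd(B)
    lam = vt[-1]
    return lam / lam.mean()   # fix the overall scale (assumption)
```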
(2.5) Use the depth factors $\lambda_{1,j},\dots,\lambda_{F,j}$ obtained in step (2.4) in place of the depth factors that were initialized to 1 in step (2.1), and then use the $T'_{P\times P}$ found in step (2.3) to carry out the iterative analysis; the calculation formula of the iterative analysis is
$[\lambda'_{i,1}s_{i,1}\ \cdots\ \lambda'_{i,P}s_{i,P}]\,T'_{P\times P} = 0$ (5)
Suppose that $T'_{P\times P}$ is expressed as $(t_{p,j})_{P\times P}$; then formula (5) can be expressed as
$\lambda'_{i,1}x_{i,1}t_{1,j}+\dots+\lambda'_{i,P}x_{i,P}t_{P,j}+\lambda'_{i,1}y_{i,1}t_{1,j}+\dots+\lambda'_{i,P}y_{i,P}t_{P,j}+\lambda'_{i,1}t_{1,j}+\dots+\lambda'_{i,P}t_{P,j}=0$
Rearranging the above formula gives
$\lambda'_{i,1}t_{1,j}(x_{i,1}+y_{i,1}+1)+\dots+\lambda'_{i,P}t_{P,j}(x_{i,P}+y_{i,P}+1)=0$
In the formula $i = 1,\dots,F$; for the i-th row of $M'_{3F\times P}$ there are only P unknowns and F equations can be listed, and F is normally greater than P, so the values of the depth factors $\lambda'_{i,1},\dots,\lambda'_{i,P}$ can be found; in this embodiment F and P are respectively 250 images and 48 feature points;
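The row-wise solve can be sketched in the same spirit, using the collapsed form of formula (5) derived above. As before, the null-space solve and the scale normalisation are illustrative assumptions rather than the patented implementation.

```python
import numpy as np

def depth_factors_per_frame(s_row, T_row):
    """Solve the row constraint [l_i1*s_i1 ... l_iP*s_iP] @ T_row = 0
    (formula (5), in its collapsed one-equation-per-column form) for the
    depth factors of one image i.

    s_row : (P, 3) homogeneous coordinates s_ij of all P points in frame i.
    T_row : (P, P) projection matrix built from D.
    """
    # A[j, p] = t_{p,j} * (x_ip + y_ip + 1), i.e. the coefficient of lambda'_{i,p}
    # in the j-th collapsed equation.
    A = T_row.T * s_row.sum(axis=1)
    _, _, vt = np.linalg.svd(A)
    lam = vt[-1]
    return lam / lam.mean()   # fix the overall scale (assumption)
```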
(2.6) Substitute the $\lambda'_{i,1},\dots,\lambda'_{i,P}$ obtained in step (2.5) into step (2.1) and repeat steps (2.2)–(2.5) until the absolute difference between $v_{3r+2}$ and the result of the previous iteration is less than or equal to ξ, $10^{-4} \ge \xi > 0$; the iteration then terminates, k being the number of iterations, and the values of $U^{(k)}$, $D^{(k)}$ and $V^{(k)}$ are obtained; in this embodiment the number of iterations k is 100 and $\xi = 10^{-5}$.
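Putting the pieces together, the outer iteration of steps (2.1)–(2.6) might be outlined as follows. The loop reuses the helper sketches given above; the convergence test on the (3r+2)-th singular value follows the ξ criterion of step (2.6), while the loop structure and variable names are assumptions made for illustration, not the patented implementation itself.

```python
import numpy as np

def iterate_projective_reconstruction(s_all, r, xi=1e-5, max_iter=100):
    """Outline of the outer iteration: rescale M' with the current depth
    factors, redo the truncated SVD, re-solve the depths from the column
    and then the row constraints, and stop when v_{3r+2} changes by <= xi.

    s_all : (F, P, 3) homogeneous feature coordinates from step (1).
    Uses truncated_svd_and_projectors, depth_factors_per_point and
    depth_factors_per_frame from the sketches above.
    """
    F, P, _ = s_all.shape
    lam = np.ones((F, P))                      # depth factors initialised to 1
    v_prev = np.inf
    for _ in range(max_iter):
        # Build M' by scaling every s_ij with its current depth factor.
        M = (lam[:, :, None] * s_all).transpose(0, 2, 1).reshape(3 * F, P)
        U, V, D, T_col, T_row = truncated_svd_and_projectors(M, r)
        v_next = np.linalg.svd(M, compute_uv=False)[3 * r + 1]   # v_{3r+2}
        if abs(v_next - v_prev) <= xi:
            break
        v_prev = v_next
        for j in range(P):                     # column constraint, step (2.4)
            lam[:, j] = depth_factors_per_point(s_all[:, j, :], T_col)
        for i in range(F):                     # row constraint, step (2.5)
            lam[i, :] = depth_factors_per_frame(s_all[i], T_row)
    return U @ V @ D                           # projective reconstruction M = U V D
```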
(2.7) Using the values of $U^{(k)}$, $V^{(k)}$ and $D^{(k)}$ obtained in step (2.6), the result of the projective reconstruction is $M = U^{(k)} V^{(k)} D^{(k)}$; the reconstruction results are shown in Figs. 4 and 7.
Embodiment 2
This embodiment addresses reconstruction under varying image noise, and the basic operation steps are the same as in embodiment 1. Synthetic data with F = 80 images and P = 100 feature points are generated for the experiment, the number of iterations k is 30 and $\xi = 10^{-4}$. Zero-mean Gaussian noise whose variance varies from 0 to 2 is added to the images, the experiment is rerun 60 times for each of the basis element numbers r = 2, 4, 6, 8 and 10, and the results are averaged; the experimental results are shown in Fig. 8.
It can be seen from Fig. 8 that the value of $v_{3r+2}$ increases linearly with the image noise, which shows that the method has good robustness. It can also be seen that the larger the number of basis elements, the smaller the value of $v_{3r+2}$; the reason is that in the singular value decomposition the values in V are arranged from large to small, so the corresponding value is smaller.
To verify the convergence of the method of the present invention, synthetic data with F = 80 images and P = 100 feature points are generated for the experiment, zero-mean Gaussian noise with variance varying from 0 to 1.5 is added to the images, and the number of basis elements is r = 3; the results are shown in Fig. 9.
It can be seen from Fig. 9 that $v_{3r+2}$ converges rapidly as the number of iterations increases, which shows that the method of the present invention has good convergence, and the smaller the noise, the better the convergence. The convergence rate is related to the ratio $v_{3r+1}/v_{3r+2}$: the larger the ratio, the faster the convergence. When the noise is small, the value of $v_{3r+2}$ is smaller, the ratio is larger, and the convergence is faster.
To demonstrate the advantages of the present invention, the re-projection error and $v_{3r+2}$ are compared with those of the method of comparative example 1, where the number of basis elements is r = 9.
Step (1) of comparative example 1 is identical to operating step (1) of embodiment 1 of the present invention, and its step (2) is identical to operating steps (2.1)–(2.5) of embodiment 1 of the present invention.
The results obtained are shown in Table 1.
Table 1. Comparison of the results of embodiment 1 and comparative example 1
It can be seen from Table 1 that the re-projection error and the value of $v_{3r+2}$ of the method of the present invention are smaller, by about an order of magnitude, than those of the method of comparative example 1, which does not use the row-vector constraint, so its reconstruction effect is better.
The trajectory-basis-based projective reconstruction method of the present invention is applicable not only to human motion trajectories, but also to other scenarios that can reflect motion trajectories, such as dynamic scene reconstruction. Content not described in detail above is known to those skilled in the art.

Claims (3)

1. A projective reconstruction method based on trajectory bases, characterized by comprising the following steps:
(1) From every image of the image sequence, extract the feature point data $s_{i,j}$ ($i = 1,\dots,F$, $j = 1,\dots,P$) that reflect the motion trajectories, where F and P are respectively the number of images and the number of feature points;
(2) Solve the depth factors of the images from the feature point data and complete the projective reconstruction, specifically:
(2.1) Assume the camera model is a perspective projection model, initialize the depth factors to 1, and establish the three-dimensional homogeneous trajectory coordinates of the feature points; the j-th feature point of the i-th image can be represented by the formula
$s_{i,j} = P_i S_{i,j}$ (1)
where $s_{i,j} = [x_{i,j}, y_{i,j}, 1]^T$ are the homogeneous coordinates of the feature point extracted in step (1), $P_i$ is a 3×4 projection matrix, $S_{i,j} = [X_{i,j}, Y_{i,j}, Z_{i,j}, 1]^T$ are the three-dimensional homogeneous coordinates of the j-th feature point of the i-th image, $X_{i,j}$, $Y_{i,j}$ and $Z_{i,j}$ are respectively the i-th-row values of the three-dimensional trajectories $x_j$, $y_j$ and $z_j$ of the j-th feature point, $a_{xn,j}$, $a_{yn,j}$, $a_{zn,j}$ are coefficients, $\sigma_n = (\sigma_{1,n}\ \cdots\ \sigma_{F,n})^T$ is a trajectory basis vector, $i = 1,\dots,F$, F being the number of images extracted in step (1), $j = 1,\dots,P$, P being the total number of feature points of all images in step (1), and r is the number of basis elements;
from formula (1), the three-dimensional homogeneous trajectory coordinate expression of all the feature points can be obtained as formula (2):
where $A_x$, $A_y$ and $A_z$ are expressed respectively as follows:
(2.2) Establish the image sequence matrix from the three-dimensional homogeneous trajectory coordinates of the feature points and perform SVD on it;
express the image sequence by the matrix $M'_{3F\times P}$, so that $M'_{3F\times P} = P_{3F\times 4F} S_{4F\times P}$; from formula (2) it is known that the rank of $M'_{3F\times P}$ is 3r+1, so SVD can be performed on $M'_{3F\times P}$, giving
$M'_{3F\times P} = U'_{3F\times 3F}\, V'_{3F\times P}\, D'_{P\times P}$ (3)
where $U'_{3F\times 3F}$ and $D'_{P\times P}$ are orthogonal matrices, $V'_{3F\times P} = \mathrm{diag}(v_1\ v_2\ \cdots\ v_t)$ with $t = \min(3F, P)$ is a diagonal matrix, U is the first 3r+1 columns of $U'_{3F\times 3F}$, V is the upper-left $(3r+1)\times(3r+1)$ submatrix of $V'_{3F\times P}$, D is the first 3r+1 rows of $D'_{P\times P}$, and $\lambda_{i,j}$ is the depth factor, $i = 1,\dots,F$, $j = 1,\dots,P$;
(2.3) From the orthogonal matrices obtained by the decomposition, solve the projection matrices corresponding to the first 3r+1 rows and the first 3r+1 columns of the image sequence, r being the number of basis elements; from formula (3), find the projection matrix $T_{3F\times 3F}$ of U and the projection matrix $T'_{P\times P}$ of D, where $T_{3F\times 3F} = I - U(U^{T}U)^{-1}U^{T}$, $T'_{P\times P} = I - D^{T}(DD^{T})^{-1}D$, and I is the identity matrix;
(2.4) Compute the depth factor values using the projection matrix corresponding to the first 3r+1 columns obtained above;
(2.5) Substitute the depth factors solved in step (2.4) into step (2.1), repeat the operations of step (2.1) and step (2.3) to solve the projection matrix of the first 3r+1 rows of the image sequence, then perform the iterative analysis on every row and solve the depth factors of the image sequence again;
(2.6) Substitute the depth factor values solved in step (2.5) into step (2.1) and repeat steps (2.2)–(2.5) until the absolute difference between the value of $v_{3r+2}$ obtained by the analysis and the result of the previous iteration is less than or equal to ξ, $10^{-4} \ge \xi > 0$; the iteration then terminates, k being the number of iterations, and the values of $U^{(k)}$, $D^{(k)}$ and $V^{(k)}$ are obtained;
(2.7) Using the values of $U^{(k)}$, $V^{(k)}$ and $D^{(k)}$ obtained in step (2.6), obtain the projective reconstruction result $M = U^{(k)} V^{(k)} D^{(k)}$ and complete the projective reconstruction of the image sequence.
2. The projective reconstruction method based on trajectory bases according to claim 1, characterized in that step (2.4) is specifically: using the projection matrix $T_{3F\times 3F}$ obtained in step (2.3), the values of the depth factors $\lambda_{1,j},\dots,\lambda_{F,j}$ are calculated by the following formula (4),
where $j = 1,\dots,P$.
3. The projective reconstruction method based on trajectory bases according to claim 1, characterized in that the calculation formula of the iterative analysis involved in step (2.5) is:
$[\lambda'_{i,1}s_{i,1}\ \cdots\ \lambda'_{i,P}s_{i,P}]\,T'_{P\times P} = 0$ (5)
where $i = 1,\dots,F$.
CN201510991797.0A 2015-12-25 2015-12-25 Projective reconstruction method based on trajectory bases Expired - Fee Related CN105654472B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510991797.0A CN105654472B (en) 2015-12-25 2015-12-25 Projective reconstruction method based on trajectory bases


Publications (2)

Publication Number Publication Date
CN105654472A CN105654472A (en) 2016-06-08
CN105654472B true CN105654472B (en) 2018-10-23

Family

ID=56477923

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510991797.0A Expired - Fee Related CN105654472B (en) 2015-12-25 2015-12-25 Projective reconstruction method based on trajectory bases

Country Status (1)

Country Link
CN (1) CN105654472B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102222361A (en) * 2010-04-06 2011-10-19 清华大学 Method and system for capturing and reconstructing 3D model
CN102592308A (en) * 2011-11-30 2012-07-18 天津大学 Single-camera video three-dimensional reconstruction method based on wavelet transformation

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7106910B2 (en) * 1999-10-01 2006-09-12 Intel Corporation Color video coding scheme


Also Published As

Publication number Publication date
CN105654472A (en) 2016-06-08


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20181023

Termination date: 20211225