CN100468465C - Stereo vision three-dimensional face modeling method based on virtual image correspondence - Google Patents

Stereo vision three-dimensional face modeling method based on virtual image correspondence

Info

Publication number
CN100468465C
CN100468465C CNB2007100237784A CN200710023778A
Authority
CN
China
Prior art keywords
image
stereographic map
point
face
virtual image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CNB2007100237784A
Other languages
Chinese (zh)
Other versions
CN101101672A (en)
Inventor
汪增福
郑颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Science and Technology of China USTC
Original Assignee
University of Science and Technology of China USTC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Science and Technology of China USTC filed Critical University of Science and Technology of China USTC
Priority to CNB2007100237784A priority Critical patent/CN100468465C/en
Publication of CN101101672A publication Critical patent/CN101101672A/en
Application granted granted Critical
Publication of CN100468465C publication Critical patent/CN100468465C/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The method uses a three-dimensional model of a reference human face to estimate the face pose parameters in the input stereo image pair, and generates a virtual image pair of the reference face under the same pose. Because the correspondence between the two virtual images is known exactly, this correspondence is extended from the virtual pair to the input pair by computing the correspondence between the input face and the reference face under the same view. This greatly improves the matching quality for face stereo image pairs and ensures that the face point cloud obtained from the stereo disparity is more accurate. The method further converts the three-dimensional point-cloud optimization problem into a one-dimensional curve optimization, which greatly raises processing speed. Finally, a three-dimensional facial surface is built from the optimized point cloud and texture-mapped to obtain a highly realistic three-dimensional face model. The invention can meet the requirements of various practical applications for three-dimensional face models; it is practical, fast, and accurate, and has broad application prospects.

Description

Stereo vision three-dimensional human face modeling method based on the virtual image correspondence
Technical field:
The invention belongs to the technical fields of image processing and computer vision, and specifically relates to a method of building three-dimensional face models using stereo vision.
Background technology:
At present, image-based three-dimensional face modeling methods have been studied extensively and deeply. "Computer Applications" magazine, 2000, vol. 20, no. 7, pages 41-45 and 255-257, proposed a method that generates a three-dimensional face model from two orthogonal face images, one frontal and one profile; the deficiency of this method is a rather strict orthogonality requirement on the input images. Chinese patent applications 200610088857.9, "Fast face model building method and system based on a single photo", and 200610024720.7, "Method for fast face model reconstruction from a single frontal image", simplify this approach by performing three-dimensional face modeling from a frontal image alone, but because they do not obtain the real depth information of the specific face, the generated three-dimensional model of the specific face is not very accurate. The shape-from-shading technique described in IEEE Transactions on Pattern Analysis and Machine Intelligence, 1999, vol. 21, no. 8, pages 690-706, can also recover the three-dimensional shape of a face from a single image. However, regions such as the eyes, nose, and mouth recovered by shape from shading tend to show depressions, and the depth information is not very accurate, so this method is not well suited to direct use for three-dimensional face modeling. In addition, the method proposed in Journal of Visualization and Computer Animation, 2001, vol. 12, no. 4, pages 227-240, can build a three-dimensional face model from video, but a video sequence contains much redundant information and the computational complexity is high. The morphable model proposed in IEEE Transactions on Pattern Analysis and Machine Intelligence, 2003, vol. 25, no. 9, pages 1063-1074, uses samples from a three-dimensional face database to guide three-dimensional face modeling through learning, but this method requires a large amount of three-dimensional face data as its basis, which is difficult to satisfy in practical applications. A stereo-vision-based three-dimensional face modeling method proposed in Image and Vision Computing, 2000, vol. 18, no. 4, pages 337-343, suffers from strong ambiguity in computing the correspondence between the left and right images, so the generated three-dimensional face model is unsatisfactory and deviates greatly from the target face; many constraints and prior conditions must therefore be introduced to optimize the model in three-dimensional space, which greatly increases the complexity of the modeling work.
Summary of the invention:
In view of the deficiencies of the prior art, the present invention proposes a stereo-vision method based on virtual image correspondence that is suitable for human faces and can accurately build a three-dimensional model of the measured face from an input stereo image pair of that face.
The stereo-vision three-dimensional face modeling method based on virtual image correspondence of the present invention comprises the following steps:
(1) calibrating the intrinsic and extrinsic parameters of the left and right cameras;
(2) acquiring a stereo image pair of the measured face with the calibrated left and right cameras, and rectifying it to obtain a rectified stereo image pair;
(3) computing the correspondence of the rectified stereo image pair;
(4) using the obtained correspondence of the rectified stereo image pair to build an initial three-dimensional point cloud of the measured face, and optimizing the point cloud;
(5) building a triangular patch mesh on the optimized point cloud to obtain the three-dimensional surface of the measured face;
(6) using the input stereo image pair to synthesize the texture of the three-dimensional model.
The method is characterized in that:
Computing the correspondence of the rectified stereo image pair specifically comprises: first, using a reference three-dimensional face model, estimating the pose, size, and position of the measured face in the rectified stereo image pair, and generating a virtual image pair of the reference face whose pose matches that of the measured face in the rectified pair; then, computing the correspondence between the rectified stereo image pair and the virtual image pair; and finally, using the correspondence between the rectified pair and the virtual pair, together with the known correspondence between the left and right images of the virtual pair, obtaining the correspondence between the left and right images of the rectified stereo pair.
Optimizing the three-dimensional point cloud comprises: first sampling the point cloud on a self-defined uniform grid, so that the x, y coordinates of the cloud points become the grid-point coordinates; then obtaining the z coordinate of each grid point by interpolation; and finally, taking the y coordinate as a cursor, extracting each scan line, that is, a curve in the x-z plane, and smoothing the z values of the curve with a moving-average window algorithm.
Estimating the pose, size, and position of the measured face in the rectified stereo image pair specifically comprises: first locating more than six feature points on the two-dimensional face images of the rectified pair, with the corresponding feature points calibrated in advance on the reference three-dimensional face model; then building a cost function as the sum of squared distances between the projections of the three-dimensional model feature points and the two-dimensional image feature points, parameterized by the projection matrix; and finally obtaining the optimal parameter values by nonlinear optimization.
Computing the correspondence between the rectified stereo image pair and the virtual image pair comprises: building triangular meshes on the rectified images and on the virtual images, with the facial feature points as nodes; establishing, for each pair of corresponding triangles on a rectified image and a virtual image, the affine relation between them; and, for any point on a rectified image, determining the triangle containing it and the position of the corresponding triangle on the virtual image, then mapping the point onto the virtual image through the affine transformation between the two triangles, thereby obtaining its corresponding point on the virtual image.
Obtaining the correspondence between the left and right images of the rectified stereo pair from the rectified-to-virtual correspondences and the known left-right correspondence of the virtual pair comprises: starting from the left image of the rectified pair, using the correspondence between the rectified left image and the virtual left image to obtain, for a point Pl on the rectified left image, its corresponding point Pl' on the virtual left image; then, using the known correspondence between the virtual left and right images, obtaining the point Pr' on the virtual right image corresponding to Pl'; and finally, using the correspondence between the virtual right image and the rectified right image, obtaining the point Pr on the rectified right image corresponding to Pr'. The point Pr is then the desired correspondent of the point Pl on the rectified left image. The method checks whether Pl and Pr satisfy the epipolar constraint, that is, whether their y coordinates are equal or differ by less than a certain threshold, in order to reject incorrect correspondences.
The reference three-dimensional face model can be either a generic three-dimensional face model, or an initial estimate of the measured face's three-dimensional model obtained by simply deforming a generic face model according to the information in the rectified stereo image pair.
Compared with the prior art, the stereo-vision three-dimensional face modeling method based on virtual image correspondence of the present invention uses a virtual image pair generated from a reference three-dimensional face model to assist the correspondence computation. The correspondence problem between face images seen from different viewpoints, which existing stereo matching methods must solve, is converted into the correspondence problem between a face image and its virtual counterpart under the same viewpoint, for which mature methods exist. This overcomes the ambiguity present in the left-right correspondence computation of existing stereo-vision-based three-dimensional modeling methods, makes the matching of corresponding points accurate, and thereby yields reliable disparity information.
With accurate and reliable disparity information as a basis, the present invention uses a stereo-vision three-dimensional reconstruction algorithm to obtain accurate three-dimensional data of the measured face, overcoming the shortcoming of existing single-frontal-image and shape-from-shading methods, which can recover only approximate three-dimensional information of the measured face, and achieving highly realistic three-dimensional face modeling.
Because stereo vision is used as the three-dimensional measurement means, the present invention needs only two face images acquired by two cameras to complete the three-dimensional face modeling work, and in use two cameras differ little from a single camera. This overcomes the orthogonality requirement that orthogonal-image methods place on the input images, the data redundancy of video-sequence methods, and the need of morphable-model methods for large amounts of three-dimensional data, which makes them hard to apply in practice; the method of the present invention is therefore highly practical.
The present invention converts the three-dimensional point-cloud optimization problem into a one-dimensional curve optimization problem, thereby overcoming the tedious model optimization of existing stereo-vision-based three-dimensional modeling methods and greatly improving the modeling processing speed.
In summary, the present invention uses virtual image correspondence to solve the difficulty of computing the left-right correspondence of face stereo image pairs, so that accurate three-dimensional face point-cloud data can be obtained by stereo vision, and a highly realistic three-dimensional model is obtained by simple one-dimensional curve optimization. The proposed three-dimensional face modeling method is accurate and convenient, can meet the demands of various practical applications for three-dimensional face models, and has broad application prospects.
Description of drawings:
Fig. 1 is a flowchart of the stereo-vision three-dimensional face modeling method based on virtual image correspondence of the present invention.
Fig. 2 is a schematic diagram of a typical placement of the stereo cameras.
Fig. 3 is a schematic diagram of a standard parallel-configuration stereo vision system.
Fig. 4 is a schematic diagram of the correspondence computation for non-feature points.
Fig. 5 is a three-dimensional model of the measured face acquired with a Konica Minolta VIVID 910 three-dimensional laser scanner.
Fig. 6 is an example of a three-dimensional model of a measured face built with the stereo-vision three-dimensional face modeling method based on virtual image correspondence of the present invention.
Embodiment:
Embodiment 1:
Fig. 1 shows the detailed flow of the stereo-vision three-dimensional face modeling method based on virtual image correspondence of the present invention. First the left and right cameras are calibrated (a1, a2); then the left and right images of the measured face are captured and rectified using the calibration results (b1, b2); next the correspondence between the left and right images of the stereo pair is computed (c). The concrete steps of this correspondence computation c are: first, using the reference three-dimensional face model, estimate the pose, size, and position of the measured face in the input left and right images, and generate a virtual image pair of the reference face whose pose matches that of the measured face in the input pair (c1); then compute the correspondence between the input stereo pair and the virtual pair (c2); finally, use the correspondence between the input pair and the virtual pair, together with the known left-right correspondence of the virtual pair, to obtain the correspondence between the left and right input images (c3). Once the correspondence is available, the three-dimensional point-cloud information of the corresponding points can be recovered with the stereo-vision three-dimensional reconstruction algorithm, and the point cloud is further optimized (d); a three-dimensional facial surface is then built from the optimized point cloud (e), and the input face images serve as the texture of the three-dimensional model (f), yielding a complete three-dimensional face model (g).
The embodiment is described in detail below.
Fig. 2 is a schematic diagram of the typical placement of the stereo cameras used in the present invention: the left camera C1 and the right camera C2 are placed so that both can capture the whole of the measured face simultaneously. A scene point P in space forms image points P1 and P2 on the left and right image planes respectively; the image point P1 of the same scene point P on the left image plane and the image point P2 on the right image plane are called corresponding points. The stereo-vision method reconstructs the three-dimensional information of scene points from the information of corresponding points in the left and right images. For this, the relation between the three-dimensional coordinates of a scene point and the image coordinates of its image points must be understood. Taking the left camera in Fig. 1 as an example: in the schematic of Fig. 2, the left camera C1 involves three coordinate systems: the world coordinate system OwXwYwZw, the camera coordinate system O1X1Y1Z1, and the image coordinate system OU1V1. Let the coordinates of the scene point P be (Xw, Yw, Zw) in the world coordinate system and (Xc, Yc, Zc) in the camera coordinate system, and let the image coordinates of its projection P1 be (u, v). Then, by the perspective projection principle, the three kinds of coordinates are related as follows:
$$
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} \alpha_x & 0 & u_0 \\ 0 & \alpha_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix}
= \begin{bmatrix} \alpha_x & 0 & u_0 \\ 0 & \alpha_y & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} R & t \end{bmatrix}
\begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
= M_1 M_2 \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix}
$$
Here the camera intrinsic matrix M_1 is fully determined by the x-axis pixel scale factor α_x, the y-axis pixel scale factor α_y, and the intersection (u_0, v_0) of the camera optical axis with the image plane; because these parameters depend only on the camera's internal structure, they are called the camera intrinsic parameters. The camera extrinsic matrix M_2 is fully determined by the rotation R and translation t of the camera with respect to the world coordinate system. The process of determining the intrinsic and extrinsic parameters of a camera is called camera calibration. Many mature calibration methods exist; see computer vision textbooks, or the toolbox at http://www.vision.caltech.edu/bouguetj/calib_doc/.
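The pinhole relation above can be sketched in a few lines of numpy. This is an illustrative sketch, not the patent's implementation; the intrinsic values (α_x = α_y = 800 px, principal point (320, 240)) and the test point are assumed for demonstration only.

```python
import numpy as np

def project_point(K, R, t, Xw):
    """Project a 3-D world point to pixel coordinates via the pinhole model
    Z_c [u, v, 1]^T = K (R Xw + t)."""
    Xc = R @ Xw + t          # world -> camera coordinates
    uvw = K @ Xc             # camera -> homogeneous image coordinates
    return uvw[:2] / uvw[2]  # perspective division by Z_c

# Assumed intrinsics: alpha_x = alpha_y = 800 px, principal point (320, 240)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                # camera frame aligned with the world frame
t = np.zeros(3)
uv = project_point(K, R, t, np.array([0.1, 0.2, 2.0]))
# a point 0.1 right, 0.2 up, 2.0 ahead projects to pixel (360, 320)
```

In a calibrated system, K would come from the calibration step described above rather than being written by hand.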
Because the two cameras are not placed in parallel during capture, and in order to simplify subsequent computation, images as if taken under a standard parallel-axis stereo vision system are needed. This can be achieved by image rectification, using the calibrated intrinsic and extrinsic camera parameters. Many rectification methods also exist; see, for example, the method proposed in Machine Vision and Applications, 2000, vol. 12, no. 4, pages 16-22.
After rectification, the images taken under the original non-standard stereo vision system are converted into images taken under a standard parallel-axis stereo vision system, shown in Fig. 3. In this stereo system, the left camera C1 and the right camera C2 have identical intrinsic parameters, their optical axes are parallel, and their x axes coincide. The two camera coordinate systems differ only by a translation along the x axis; this translation distance is called the baseline length and is denoted b. The left line E1 and the right line E2 are the intersections of the plane PO1O2, formed by the scene point P and the image-plane origins O1 and O2, with the left image plane I1 and the right image plane I2 respectively; because the two image planes lie in the same plane, E1 and E2 are collinear and parallel to the x axis. In a stereo vision system, E1 and E2 are called the epipolar lines of the image points P2 and P1; the corresponding point of any image point must lie on its epipolar line. In fact, in a parallel-configuration stereo system, the epipolar constraint requires a point and its corresponding point to have equal y coordinates, which makes the computation of corresponding points convenient. Under the standard parallel-axis configuration, let the coordinates of the space point P in the left camera coordinate system be (x1, y1, z1); then its coordinates in the right camera coordinate system are (x1 - b, y1, z1). Let the image coordinates of the left image point P1 and the right image point P2 in the left and right image coordinate systems be (u1, v1) and (u2, v2), and assume the left camera coordinate system is the world coordinate system. Then the coordinates (x1, y1, z1) of the space point P are easily solved as:
$$
x_1 = \frac{b\,(u_1 - u_0)}{u_1 - u_2}, \qquad
y_1 = \frac{b\,\alpha_x (v_1 - v_0)}{\alpha_y (u_1 - u_2)}, \qquad
z_1 = \frac{b\,\alpha_x}{u_1 - u_2}
$$
In these formulas the x-axis pixel scale factor α_x, the y-axis pixel scale factor α_y, and the intersection (u_0, v_0) of the optical axis with the image plane are all obtained by calibration. It follows that in a rectified parallel-configuration stereo vision system, the key to reconstructing the three-dimensional scene is to find the left-right image correspondence so as to obtain the disparity u_1 - u_2.
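The reconstruction formulas above can be sketched directly. This is a minimal sketch with assumed values (baseline 0.1, α_x = α_y = 800, principal point (320, 240)); note that the recovered point is consistent with projecting (0.1, 0.2, 2.0) through the same intrinsics.

```python
import numpy as np

def reconstruct(u1, v1, u2, b, ax, ay, u0, v0):
    """Recover (x1, y1, z1) in the left-camera frame from a rectified
    correspondence, using the parallel-axis triangulation formulas."""
    d = u1 - u2                          # disparity
    x1 = b * (u1 - u0) / d
    y1 = b * ax * (v1 - v0) / (ay * d)
    z1 = b * ax / d
    return np.array([x1, y1, z1])

# Assumed calibration: baseline 0.1, ax = ay = 800, principal point (320, 240)
p = reconstruct(u1=360.0, v1=320.0, u2=320.0, b=0.1,
                ax=800.0, ay=800.0, u0=320.0, v0=240.0)
# disparity 40 px -> z1 = 0.1 * 800 / 40 = 2.0, and (x1, y1) = (0.1, 0.2)
```

Smaller disparities yield larger depths, which is why accurate correspondence (and hence the virtual-image matching below) matters most for distant points.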
Because the surface of the human face is covered by skin, whose reflectivity varies slowly and evenly, features are hard to extract; moreover, the face pose differs considerably between the left and right images of a stereo pair. Both factors make the correspondence computation very difficult. For this reason, a reference three-dimensional face model is introduced and a virtual image pair of the reference face is generated to assist the correspondence computation for the input stereo pair. The reference three-dimensional face model used here can be either a generic three-dimensional face model, or the result of an initial estimate of the measured face's three-dimensional model, obtained by simply deforming a generic face model according to the information in the input face images.
Below, the face in the input stereo pair is called the measured face. The input stereo pair is rectified so that it is converted into images under a standard parallel-axis stereo vision system. First the position, size, and pose of the measured face in the left and right images must be estimated, in order to generate a virtual image pair of the reference face with a similar pose. To this end, more than six feature points are located on the input two-dimensional face images; these can be obtained automatically by the feature localization method cited below, while the corresponding feature points are calibrated in advance on the three-dimensional model. The estimated pose should make the feature points of the reference three-dimensional face model, projected onto the two-dimensional image, as close as possible to the feature points on the left and right images of the measured face. Let a three-dimensional model feature point have coordinates x = (x, y, z)^T; its projection p = (p_x, p_y)^T onto the two-dimensional image plane is obtained by:
$$
p = f \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} \cos\gamma & 0 & \sin\gamma \\ 0 & 1 & 0 \\ -\sin\gamma & 0 & \cos\gamma \end{bmatrix}
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\varphi & -\sin\varphi \\ 0 & \sin\varphi & \cos\varphi \end{bmatrix} x + t_{2d}
$$
This formula uses a weak perspective projection; the scale factor f, the rotation angles φ about the x axis, γ about the y axis, and θ about the z axis, and the two-dimensional translation vector t_2d = (t_x, t_y)^T are the pose parameters to be determined. The sum of squared distances between the projected points and the two-dimensional feature points can be written as a cost function, that is:
$$
E = \sum_j \| q_j - p_j \|^2
$$
Here the two-dimensional feature point q_j = (q_{x,j}, q_{y,j})^T is a feature point located on the input image, and the projected point p_j is the image of the corresponding three-dimensional feature point on the image plane. The cost function E is a nonlinear function of the scale factor f, the rotation angles φ, γ, and θ, and the translation vector t_2d = (t_x, t_y)^T, and can be optimized with the Levenberg-Marquardt method. When E reaches its minimum, the pose of the reference face is closest to that of the measured face.
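The weak-perspective pose fit can be sketched with a small hand-rolled Levenberg-Marquardt loop (fixed damping, numerical Jacobian). This is a simplified illustration, not the patent's implementation; the six 3-D "feature points" and the "true" pose are synthetic, chosen only so the fit can be checked against known parameters.

```python
import numpy as np

def rot(phi, gamma, theta):
    """Rz(theta) @ Ry(gamma) @ Rx(phi), matching the projection formula."""
    cx, sx = np.cos(phi), np.sin(phi)
    cy, sy = np.cos(gamma), np.sin(gamma)
    cz, sz = np.cos(theta), np.sin(theta)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def project(params, X3d):
    """Weak perspective: p = f * (rotated point)_{xy} + t_2d."""
    f, phi, gamma, theta, tx, ty = params
    return f * (rot(phi, gamma, theta) @ X3d.T)[:2].T + np.array([tx, ty])

def fit_pose(X3d, q2d, x0, iters=50, lam=1e-3, h=1e-6):
    """Minimise E = sum_j ||q_j - p_j||^2 by damped Gauss-Newton
    (a minimal Levenberg-Marquardt variant with fixed damping lam)."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        r = (project(x, X3d) - q2d).ravel()          # residual vector
        J = np.empty((r.size, x.size))
        for k in range(x.size):                      # forward differences
            dx = np.zeros_like(x); dx[k] = h
            J[:, k] = ((project(x + dx, X3d) - q2d).ravel() - r) / h
        H = J.T @ J + lam * np.eye(x.size)           # damped normal equations
        x = x - np.linalg.solve(H, J.T @ r)
    return x

# Synthetic check: project six non-coplanar points with a known pose, then refit.
X3d = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.],
                [0., 0., 1.], [1., 1., 0.], [1., 0., 1.]])
true = np.array([2.0, 0.1, -0.2, 0.3, 5.0, -3.0])    # f, phi, gamma, theta, tx, ty
q2d = project(true, X3d)
fit = fit_pose(X3d, q2d, x0=[1., 0., 0., 0., 0., 0.])
```

A production implementation would adapt the damping per iteration and use analytic derivatives, but the structure of the optimization is the same.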
Because the virtual image pair is generated from the known reference three-dimensional face model, the correspondence between its left and right images is known exactly: the image points of the same three-dimensional vertex on the left and right virtual images correspond to each other. To extend the known virtual correspondence to the input stereo pair, the input left and right images must be matched with the virtual left and right images respectively, which is done by image warping.
The warping process relies on feature points shared by the measured face and the reference face. First, facial feature points are located on the left and right images of the measured face; these feature points generally lie on the contours of the facial organs. Many feature localization methods exist; this embodiment uses the active shape model (ASM). The active shape model is a statistics-based feature localization method: it learns the face-shape variation represented by a sample space, obtains the corresponding shape model, and uses it to guide the localization of feature points. For the concrete steps of training an active shape model and searching for feature points, see the paper in Computer Vision and Image Understanding, 1995, vol. 61, no. 1, pages 38-59. Note that because the input face images have been rectified, the epipolar constraint of the parallel-configuration stereo system requires the y coordinates of corresponding points on the left and right images to be identical; this constraint can be used to normalize the y coordinates of the same feature point on the left and right images.
For the virtual image pair of the reference face, the corresponding feature points are calibrated in advance on its three-dimensional model, and through the projection process their projections on the virtual images are obtained automatically.
With the feature points calibrated, identical feature points on the measured-face image and the reference-face image under the same viewpoint (left image or right image) can be regarded as corresponding points. What remains is to establish the correspondence of non-feature points between face images under the same viewpoint. To this end, taking the feature points as nodes, Delaunay triangulations are built on the input face image and on the virtual face image respectively. Every non-feature point on the face then falls inside some triangle. For each pair of corresponding triangles on the input and virtual images, the affine relation between them is established, in order to determine the correspondence of the non-feature points inside the triangles.
Fig. 4 illustrates the correspondence computation for non-feature points: for a non-feature point P on the input face image, first the triangle containing P is determined, say the triangle formed by feature points A, B, and C; then the position of the corresponding triangle on the virtual image is determined, namely triangle A'B'C'; finally, through the affine transformation between the two triangles, the point P inside triangle ABC is mapped into triangle A'B'C' on the virtual image, giving the corresponding point P' of P on the virtual image.
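The per-triangle affine map can be sketched as follows, assuming the Delaunay step has already matched triangle ABC to A'B'C'. This is an illustrative sketch; the vertex coordinates are invented for demonstration.

```python
import numpy as np

def affine_from_triangles(src, dst):
    """Solve for A (2x2) and b (2,) such that A @ s + b = d holds for the
    three vertex pairs; three point pairs fix the six affine unknowns."""
    M = np.hstack([src, np.ones((3, 1))])   # rows [x, y, 1]
    sol = np.linalg.solve(M, dst)           # rows of [A^T; b^T]
    return sol[:2].T, sol[2]

src = np.array([[0., 0.], [1., 0.], [0., 1.]])   # triangle ABC on the input image
dst = np.array([[1., 1.], [3., 1.], [1., 2.]])   # triangle A'B'C' on the virtual image
A, b = affine_from_triangles(src, dst)
p = np.array([0.25, 0.25])                       # interior non-feature point P
p_virtual = A @ p + b                            # its correspondent P'
# this map scales x by 2, keeps y, and shifts by (1, 1): P' = (1.5, 1.25)
```

Applying this map to every pixel inside the triangle warps the triangle's interior from one image onto the other, which is exactly how the non-feature correspondences are filled in.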
Once the correspondences between the input left image and the virtual left image, between the input right image and the virtual right image, and between the virtual left and right images have been established, these correspondences can be used to obtain the correspondence between the input left and right images. The concrete steps are: first, starting from one image of the input stereo pair, say the left image, use the correspondence between the input left image and the virtual left image to obtain the corresponding point Pl' on the virtual left image of a point Pl on the input left image; then, through the correspondence between the virtual image pair, obtain the corresponding point Pr' on the virtual right image of the point Pl' on the virtual left image; finally, through the correspondence between the virtual right image and the input right image, obtain the corresponding point Pr on the input right image of the point Pr' on the virtual right image. This point Pr on the input right image is the desired corresponding point of the point Pl on the input left image. Check whether the point Pl on the input left image and its corresponding point Pr on the input right image satisfy the epipolar constraint, i.e., whether their y coordinates are equal, or differ by less than a certain threshold, so as to reject incorrect correspondences.
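The chaining of the three correspondences, followed by the epipolar check, can be sketched as below. This is a hypothetical illustration in which each correspondence is modeled as a simple point-to-point mapping (a dict); the names and the threshold default are assumptions, not from the patent.

```python
def chain_correspondence(p_left, to_virtual_left, virtual_lr, to_input_right,
                         y_threshold=1.0):
    """Chain a point on the input left image through virtual left -> virtual
    right -> input right, then reject the match if the epipolar (equal-y)
    check fails.  Each mapping argument takes a point to its correspondent."""
    p_vl = to_virtual_left[p_left]   # input left  -> virtual left
    p_vr = virtual_lr[p_vl]          # virtual left -> virtual right
    p_right = to_input_right[p_vr]   # virtual right -> input right
    if abs(p_left[1] - p_right[1]) > y_threshold:
        return None  # violates the epipolar constraint: discard this match
    return p_right
```

A real implementation would interpolate these mappings densely via the triangle meshes rather than look them up in tables, but the control flow is the same.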
Once the corresponding points on the left and right facial images have been obtained with the above matching algorithm, the disparity information can be computed, and the three-dimensional point cloud of the face can then be recovered using the three-dimensional reconstruction algorithm of the parallel-configuration stereo vision system.
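For a parallel-configuration (rectified) stereo rig, the reconstruction from disparity reduces to the standard triangulation relation Z = f·B/d. A minimal sketch, with the function name and argument conventions as illustrative assumptions:

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline):
    """Depth for a parallel (rectified) stereo rig: Z = f * B / d,
    where d = x_left - x_right is the horizontal disparity, f is the
    focal length in pixels, and B is the camera baseline."""
    d = x_left - x_right
    if d <= 0:
        raise ValueError("disparity must be positive for points in front of the rig")
    return focal_px * baseline / d
```

With Z known, the X and Y coordinates follow by back-projection through the calibrated intrinsic parameters of step (1).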
The point cloud obtained this way may still contain errors caused by mismatches, as well as discontinuities caused by insufficient matching precision; these factors lower the realism of the generated three-dimensional model. To build a lifelike three-dimensional face model, the point cloud therefore needs further optimization, which mainly comprises three steps: sampling, interpolation, and smoothing. First, the point cloud is sampled on a self-defined uniform grid, whose density can be chosen according to the desired number of vertices of the three-dimensional face to be constructed. The resampled point cloud takes the grid point coordinates as its x and y coordinates, and the z coordinate of each grid point is obtained by interpolation. During interpolation, outliers in the interpolated region can be removed. Because the x and y coordinates are uniform, the z coordinates can conveniently be smoothed row by row: using the y coordinate as a cursor, each scanline, i.e., a curve in the x-z plane, is extracted and its z values are smoothed with a moving-average window algorithm.
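The scanline moving-average smoothing of the uniform depth grid can be sketched as follows; this assumes numpy, an odd window size, and edge padding, none of which are specified in the patent.

```python
import numpy as np

def smooth_scanlines(z_grid, window=5):
    """Scanline-wise moving-average smoothing of a depth grid.
    z_grid: (rows, cols) array of z values on a uniform (x, y) grid; each
    row is one scanline, i.e., a sampled curve in the x-z plane."""
    kernel = np.ones(window) / window
    smoothed = np.empty_like(z_grid, dtype=float)
    half = window // 2
    for i, row in enumerate(z_grid):
        # edge-pad so the smoothed scanline keeps its original length
        padded = np.pad(row, half, mode='edge')
        smoothed[i] = np.convolve(padded, kernel, mode='valid')
    return smoothed
```

Because the grid is uniform in x and y, each row can be filtered independently, which is exactly the "y coordinate as cursor" procedure of the description.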
The optimized point cloud thus obtained can be used to build a complete three-dimensional face model. A Delaunay triangulation builds the triangular patch mesh of the face, thereby establishing the three-dimensional surface of the face. The rectified left and right facial images can be mapped onto the three-dimensional model as texture.
Fig. 5 shows the three-dimensional model of the subject's face acquired with a Konica Minolta VIVID 910 three-dimensional laser scanner, and Fig. 6 gives an example of the subject's three-dimensional face model built with the virtual-image-correspondence stereo vision three-dimensional face modeling method of the present invention. Because the Konica Minolta VIVID 910 three-dimensional laser scanner has very high three-dimensional measurement precision, the model it acquires can serve as the ground-truth three-dimensional model of the subject's face. Comparing this ground-truth model with the model built by the method of the invention shows that the model built by the method of the invention faithfully reflects the three-dimensional shape of the subject's face and is comparable to the ground-truth model; it is also rather smooth and highly realistic. Compared with the ground-truth model, the average error distance between three-dimensional points is within 10 millimeters, and most of the error comes from the contour boundary of the face. The whole three-dimensional face modeling process of the method takes only half a minute on a personal computer with a 2.4 GHz central processing unit. This embodiment shows that the stereo vision three-dimensional face modeling method based on virtual image correspondence of the present invention offers high modeling precision, strong realism, and convenient and fast operation, and has broad application prospects.

Claims (1)

1. A stereo vision three-dimensional face modeling method based on virtual image correspondence, comprising the steps of:
(1) calibrating the intrinsic and extrinsic parameters of the left and right cameras;
(2) acquiring a stereo image pair of the subject's face with the calibrated left and right cameras, and rectifying it to obtain a rectified stereo pair;
(3) computing the correspondence of the rectified stereo pair;
(4) using the obtained correspondence of the rectified stereo pair to build an initial three-dimensional point cloud of the subject's face, and optimizing the three-dimensional point cloud;
(5) building a triangular patch mesh on the optimized three-dimensional point cloud, thereby obtaining the three-dimensional surface of the subject's face;
(6) synthesizing the texture of the three-dimensional model from the input stereo pair;
characterized in that:
said computing the correspondence of the rectified stereo pair specifically comprises: first, using a reference face three-dimensional model, estimating the pose, size, and position of the subject's face in the rectified stereo pair, and generating a virtual image pair of the reference face with a pose similar to that of the subject's face in the rectified stereo pair; then computing the correspondences between the rectified stereo pair and the virtual image pair; and finally, using the correspondences between the rectified stereo pair and the virtual image pair, together with the correspondence between the left and right images of the virtual image pair, to obtain the correspondence between the images of the rectified stereo pair;
said optimizing the three-dimensional point cloud comprises the steps of: first sampling the point cloud on a self-defined uniform grid, so that the point cloud takes the grid point coordinates as its x and y coordinates; then obtaining by interpolation the z coordinate corresponding to each grid point; then, using the y coordinate as a cursor, extracting each scanline, i.e., a curve in the x-z plane, and smoothing the z values of the curve with a moving-average window algorithm;
said estimating the pose, size, and position of the subject's face in the rectified stereo pair specifically comprises: first locating more than six feature points on a two-dimensional facial image of the rectified stereo pair, the corresponding feature points having been calibrated in advance on the reference face three-dimensional model; then constructing a cost function of the squared distances between the projections of the three-dimensional model feature points and the two-dimensional image feature points, the cost function taking the projection matrix as its parameters; and finally obtaining the optimal values of these parameters by nonlinear optimization;
said computing the correspondences between the rectified stereo pair and the virtual image pair comprises the steps of: forming triangle meshes on the rectified images and on the virtual images with the facial feature points as nodes; establishing, from the corresponding triangles on the rectified image and the virtual image, the affine relation between each pair of corresponding triangles; and, for a given point on a rectified image, determining the triangle containing it and the position of that triangle on the virtual image, then mapping the point onto the virtual image through the affine transform between the two triangles, thereby obtaining its corresponding point on the virtual image;
said using the correspondences between the rectified stereo pair and the virtual image pair, together with the correspondence between the left and right images of the virtual image pair, to obtain the correspondence between the images of the rectified stereo pair, comprises the steps of: first, starting from the left image of the rectified stereo pair, using the correspondence between the rectified left image and the virtual left image to obtain the corresponding point P1' on the virtual left image of a point P1 on the rectified left image; then, through the correspondence between the virtual image pair, obtaining the corresponding point Pr' on the virtual right image of the point P1' on the virtual left image; finally, through the correspondence between the virtual right image and the rectified right image, obtaining the corresponding point Pr on the rectified right image of the point Pr' on the virtual right image, the point Pr on the rectified right image being the desired corresponding point of the point P1 on the rectified left image; and checking whether the point P1 on the rectified left image and its corresponding point Pr on the rectified right image satisfy the epipolar constraint, i.e., whether the y coordinates of P1 and Pr are equal, or differ by less than a certain threshold, so as to reject incorrect correspondences.
CNB2007100237784A 2007-07-13 2007-07-13 Stereo vision three-dimensional human face modelling approach based on dummy image Expired - Fee Related CN100468465C (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNB2007100237784A CN100468465C (en) 2007-07-13 2007-07-13 Stereo vision three-dimensional human face modelling approach based on dummy image

Publications (2)

Publication Number Publication Date
CN101101672A CN101101672A (en) 2008-01-09
CN100468465C true CN100468465C (en) 2009-03-11

Family

ID=39035937

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2007100237784A Expired - Fee Related CN100468465C (en) 2007-07-13 2007-07-13 Stereo vision three-dimensional human face modelling approach based on dummy image

Country Status (1)

Country Link
CN (1) CN100468465C (en)

Families Citing this family (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101639895B (en) * 2009-08-14 2011-12-21 浙江工业大学 Method for extracting and matching features of computer visual image based on Similarity-Pictorial structural model
CN101887589B (en) * 2010-06-13 2012-05-02 东南大学 Stereoscopic vision-based real low-texture image reconstruction method
CN101916456B (en) * 2010-08-11 2012-01-04 无锡幻影科技有限公司 Method for producing personalized three-dimensional cartoon
CN101930618B (en) * 2010-08-20 2012-05-30 无锡幻影科技有限公司 Method for producing individual two-dimensional anime
CN101916457B (en) * 2010-08-27 2011-11-23 浙江大学 Datum body for acquiring three-dimensional point cloud data and point cloud synthesis method
CN101976453A (en) * 2010-09-26 2011-02-16 浙江大学 GPU-based three-dimensional face expression synthesis method
CN102486868A (en) * 2010-12-06 2012-06-06 华南理工大学 Average face-based beautiful face synthesis method
EP2689396A4 (en) * 2011-03-21 2015-06-03 Intel Corp Method of augmented makeover with 3d face modeling and landmark alignment
CN102163240A (en) * 2011-05-20 2011-08-24 苏州两江科技有限公司 Method for constructing human face characteristic image index database based on MPEG-7 (Motion Picture Experts Group-7) standard
CN102831382A (en) * 2011-06-15 2012-12-19 北京三星通信技术研究有限公司 Face tracking apparatus and method
CN102374860B (en) * 2011-09-23 2014-10-01 奇瑞汽车股份有限公司 Three-dimensional visual positioning method and system
CN102663810B (en) * 2012-03-09 2014-07-16 北京航空航天大学 Full-automatic modeling approach of three dimensional faces based on phase deviation scanning
CN103269423B (en) * 2013-05-13 2016-07-06 浙江大学 Can expansion type three dimensional display remote video communication method
EP3096692B1 (en) * 2014-01-24 2023-06-14 Koninklijke Philips N.V. Virtual image with optical shape sensing device perspective
CN103955961B (en) * 2014-04-14 2017-06-06 中国人民解放军总医院 Based on statistical ultrasonic three-dimensional reconstruction of sequence image method and system
CN105005755B (en) 2014-04-25 2019-03-29 北京邮电大学 Three-dimensional face identification method and system
FR3021443B1 (en) * 2014-05-20 2017-10-13 Essilor Int METHOD FOR CONSTRUCTING A MODEL OF THE FACE OF AN INDIVIDUAL, METHOD AND DEVICE FOR ANALYZING POSTURE USING SUCH A MODEL
EP2966621A1 (en) * 2014-07-09 2016-01-13 Donya Labs AB Method and system for converting an existing 3D model into graphical data
CN104318615B (en) * 2014-10-29 2017-04-19 中国科学技术大学 Vocal organ three-dimensional modeling method
CN106164979B (en) * 2015-07-13 2019-05-17 深圳大学 A kind of three-dimensional facial reconstruction method and system
CN105006020B (en) * 2015-07-14 2017-11-07 重庆大学 A kind of conjecture face generation method based on 3D models
CN106372629B (en) * 2016-11-08 2020-02-07 汉王科技股份有限公司 Living body detection method and device
CN106920276B (en) * 2017-02-23 2019-05-14 华中科技大学 A kind of three-dimensional rebuilding method and system
CN107563338B (en) * 2017-09-12 2021-01-08 Oppo广东移动通信有限公司 Face detection method and related product
CN109785390B (en) * 2017-11-13 2022-04-01 虹软科技股份有限公司 Method and device for image correction
CN108280870A (en) * 2018-01-24 2018-07-13 郑州云海信息技术有限公司 A kind of point cloud model texture mapping method and system
CN108446595A (en) * 2018-02-12 2018-08-24 深圳超多维科技有限公司 A kind of space-location method, device, system and storage medium
CN108648203A (en) * 2018-04-24 2018-10-12 上海工程技术大学 A method of the human body three-dimensional Attitude estimation based on monocular cam
CN108765351B (en) * 2018-05-31 2020-12-08 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN109087340A (en) * 2018-06-04 2018-12-25 成都通甲优博科技有限责任公司 A kind of face three-dimensional rebuilding method and system comprising dimensional information
CN109118579A (en) * 2018-08-03 2019-01-01 北京微播视界科技有限公司 The method, apparatus of dynamic generation human face three-dimensional model, electronic equipment
CN109377445B (en) * 2018-10-12 2023-07-04 北京旷视科技有限公司 Model training method, method and device for replacing image background and electronic system
CN109766896B (en) * 2018-11-26 2023-08-29 顺丰科技有限公司 Similarity measurement method, device, equipment and storage medium
CN109754467B (en) * 2018-12-18 2023-09-22 广州市百果园网络科技有限公司 Three-dimensional face construction method, computer storage medium and computer equipment
CN109859121A (en) * 2019-01-09 2019-06-07 武汉精立电子技术有限公司 A kind of image block bearing calibration and device based on FPGA platform
CN110533777B (en) * 2019-08-01 2020-09-15 北京达佳互联信息技术有限公司 Three-dimensional face image correction method and device, electronic equipment and storage medium
CN110796083B (en) * 2019-10-29 2023-07-04 腾讯科技(深圳)有限公司 Image display method, device, terminal and storage medium
CN110930424B (en) * 2019-12-06 2023-04-18 深圳大学 Organ contour analysis method and device
CN111563959B (en) * 2020-05-06 2023-04-28 厦门美图之家科技有限公司 Updating method, device, equipment and medium of three-dimensional deformable model of human face
CN112734890B (en) * 2020-12-22 2023-11-10 上海影谱科技有限公司 Face replacement method and device based on three-dimensional reconstruction
CN112801001B (en) * 2021-02-05 2021-10-22 读书郎教育科技有限公司 Dull and stereotyped built-in face identification safety coefficient
CN113450460A (en) * 2021-07-22 2021-09-28 四川川大智胜软件股份有限公司 Phase-expansion-free three-dimensional face reconstruction method and system based on face shape space distribution

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1404016A (en) * 2002-10-18 2003-03-19 清华大学 Establishing method of human face 3D model by fusing multiple-visual angle and multiple-thread 2D information
CN1940996A (en) * 2005-09-29 2007-04-04 中国科学院自动化研究所 Method for modeling personalized human face basedon orthogonal image

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Calibration and data fusion of a dual-camera phase measuring profilometry system. Li Yong, Su Xianyu, Wu Qingyang. Acta Optica Sinica, vol. 26, no. 4, 2006. *

Also Published As

Publication number Publication date
CN101101672A (en) 2008-01-09

Similar Documents

Publication Publication Date Title
CN100468465C (en) Stereo vision three-dimensional human face modelling approach based on dummy image
CN111815757B (en) Large member three-dimensional reconstruction method based on image sequence
Furukawa et al. Accurate, dense, and robust multiview stereopsis
Basha et al. Multi-view scene flow estimation: A view centered variational approach
CN102663820B (en) Three-dimensional head model reconstruction method
Musialski et al. A survey of urban reconstruction
Mulayim et al. Silhouette-based 3-D model reconstruction from multiple images
CN103106688B (en) Based on the indoor method for reconstructing three-dimensional scene of double-deck method for registering
CN109242954B (en) Multi-view three-dimensional human body reconstruction method based on template deformation
US20050140670A1 (en) Photogrammetric reconstruction of free-form objects with curvilinear structures
CN106485690A (en) Cloud data based on a feature and the autoregistration fusion method of optical image
CN104077804A (en) Method for constructing three-dimensional human face model based on multi-frame video image
Wang et al. Camera calibration and 3D reconstruction from a single view based on scene constraints
CN107862744A (en) Aviation image three-dimensional modeling method and Related product
CN103971404A (en) 3D real-scene copying device having high cost performance
CN104361627B (en) Binocular vision bituminous paving Micro texture 3-D view reconstructing method based on SIFT
Lu et al. A survey of motion-parallax-based 3-D reconstruction algorithms
CN101794459A (en) Seamless integration method of stereoscopic vision image and three-dimensional virtual object
Kim et al. Block world reconstruction from spherical stereo image pairs
Zhu et al. The role of prior in image based 3d modeling: a survey
Feng et al. Semi-automatic 3d reconstruction of piecewise planar building models from single image
Lee et al. Interactive 3D building modeling using a hierarchical representation
Fabbri et al. Camera pose estimation using first-order curve differential geometry
Coorg Pose imagery and automated three-dimensional modeling of urban environments
Wu et al. Photogrammetric reconstruction of free-form objects with curvilinear structures

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C17 Cessation of patent right
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20090311

Termination date: 20110713