CN101866497A - Intelligent three-dimensional face reconstruction method and system based on binocular stereo vision - Google Patents

Intelligent three-dimensional face reconstruction method and system based on binocular stereo vision

Info

Publication number: CN101866497A
Application number: CN201010209794
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 明悦, 阮秋琦
Applicant: Beijing Jiaotong University (北京交通大学)
Priority application: CN201010209794
Publication: CN101866497A

Abstract

The invention discloses an intelligent three-dimensional face reconstruction method and system based on binocular stereo vision. The method comprises: performing preprocessing operations on a face image, including image normalization, brightness normalization and image rectification; detecting the face region in the preprocessed face image and extracting facial feature points; calibrating the cameras by reconstructing an object through the projection matrix to obtain the intrinsic and extrinsic camera parameters; based on the facial feature points, extending a gray-level cross-correlation matching operator to color information and computing the disparity map produced by stereo matching according to information including the epipolar constraint, the face-region constraint and facial geometric conditions; and computing the three-dimensional coordinates of a scattered spatial point cloud of the face from the camera calibration result and the disparity map produced by stereo matching, thereby generating a three-dimensional face model. With these steps, the invention reconstructs a smoother and more lifelike three-dimensional face model.

Description

Intelligent three-dimensional face reconstruction method and system based on binocular stereo vision

Technical field

The present invention relates to the field of binocular stereo vision, and in particular to an intelligent three-dimensional face reconstruction method and system based on binocular stereo vision.

Background art

In recent years, information and communication technology (ICT) has permeated every sector and every aspect of daily life and opened an unprecedented world in which people interact with electronic devices embedded in their surroundings that sense and respond to their presence. Computer-assisted security systems centered on intelligent buildings that serve user needs are becoming a trend in domestic research and call for increasingly sophisticated services. Vision is the most direct and most general way for humans to acquire information about the outside world. The ultimate goal of vision is to allow the observer to interpret and describe a scene meaningfully, and then to plan behavior according to these interpretations, the surrounding environment and the observer's intentions.

Computer vision aims to realize human visual capabilities with computers, that is, to perceive, recognize and understand the three-dimensional scenes of the objective world. It is an important branch of computer science and artificial intelligence. Its research has two purposes: first, to realize part of the function of human vision with computers; second, to help understand the mechanisms of human vision.

Binocular stereo matching has been widely applied and extended to many research fields in recent years, such as motion estimation, object structure reconstruction and, more recently, 3D video coding. In a binocular stereo system, computing the disparity (or correspondence) between a rectified stereo image pair is a crucial step. Once this step is done, the depth information is easily reconstructed, and accurate and reliable face shape information can then be obtained. Disparity computation is difficult, however, especially for face images: the smooth, diffuse reflectance of facial skin gives faces low texture. As a result, traditional gray-level correlation stereo matching methods may fail because the correspondences are ambiguous, and their performance degrades further under poor illumination or occlusion. To date, only a few attempts have addressed 3D face reconstruction from stereo image pairs.

The initial result obtained by traditional stereo matching methods is far from the true surface, so a refinement process is indispensable. Yet refinement is time-consuming and computationally expensive, and in most cases still fails to produce a satisfactory result.

Summary of the invention

The object of the present invention is to provide an intelligent three-dimensional face reconstruction method and system based on binocular stereo vision.

In one aspect, the invention discloses an intelligent three-dimensional face reconstruction method based on binocular stereo vision, comprising the following steps: a preprocessing step of performing preprocessing operations on a face image, including image normalization, brightness normalization and image rectification; a face detection and feature point extraction step of detecting the face region in the preprocessed face image and extracting facial feature points; a camera calibration step of reconstructing an object through the projection matrix to obtain the intrinsic and extrinsic camera parameters, i.e., the camera calibration result; a binocular stereo matching step of extending, based on the facial feature points, a gray-level cross-correlation matching operator to color information and computing the disparity map produced by stereo matching according to information including the epipolar constraint, the face-region constraint and facial geometric conditions; and a three-dimensional face reconstruction step of computing the three-dimensional coordinates of the scattered spatial point cloud of the face from the camera calibration result and the disparity map produced by stereo matching, and generating a three-dimensional face model.

In the above method, preferably, in the preprocessing step the image normalization comprises: a rotation step of rotating the face image so that the two eyes in the face image are horizontal; a midpoint adjustment step of placing the midpoint of the line connecting the two eyes at the center of the image width and at a fixed position along the image height; and a scale transformation step of scaling the rotated face image to obtain standard images of identical size.

In the above method, preferably, in the preprocessing step the brightness normalization comprises: a test-image energy calculation step, in which I(i, j) denotes the gray value of the pixel in row i and column j and the energy of the test image is computed as $\mathrm{Energy} = \sum_{i,j} I(i,j)^2$; a scale factor calculation step of defining an average face, obtaining the energy AveryEnergy of the average face, and determining the scale factor ratio by $\mathrm{ratio} = \mathrm{AveryEnergy}/\mathrm{Energy}$; and a brightness normalization step of brightness-normalizing each pixel of the test image according to $I'(i,j) = I(i,j)\sqrt{\mathrm{ratio}}$.

In the above method, preferably, in the preprocessing step the image rectification comprises: applying a two-dimensional transformation to each of the two images; the original image first undergoes a perspective transform, which maps the epipole to a point at infinity so that the pencil of epipolar lines becomes a set of parallel lines; a similarity transform is then applied to align every epipolar line with the horizontal axis of the image coordinate system; finally, to reduce distortion, a shearing transform is applied to minimize the image distortion in the horizontal direction.

In the above method, preferably, in the face detection and feature point extraction step the face region is determined by skin-color likelihood, comprising the following steps: transforming a color image from RGB space to YCbCr space and converting it into a gray-level image according to a two-dimensional Gaussian model G(m, Λ²), where the gray value corresponds to the likelihood that a pixel belongs to a skin region, and thresholding the gray-level image adaptively to obtain a binary image g_ycbcr; transforming the color image from RGB space to YIQ space, extracting the I component and marking a pixel as skin when 5 < I < 80, otherwise as non-skin, thereby segmenting a binary image g_yiq; combining g_ycbcr and g_yiq with a logical AND to obtain the image g_skin; applying a morphological closing to g_skin with a 3 × 3 unit matrix as the structuring element; and, with the minimum face-region area set to 50 pixels, filling in skin-color regions smaller than 50 pixels.

In the above method, preferably, in the face detection and feature point extraction step the facial feature points are extracted by an active shape model, comprising: initializing the whole face with a generic whole-face shape template for open (or closed) eyes to obtain the approximate positions of the two outer eye corners; estimating the mouth contour with a local active shape model to obtain the true mouth edge produced by the Canny operator; and, if the eyes are detected as open and the mouth is detected as O-shaped, selecting the whole-face template with open eyes and an O-shaped mouth to search the whole facial contour.

In the above method, preferably, in the camera calibration step the intrinsic and extrinsic camera parameters are calibrated using a planar checkerboard target.

In the above method, preferably, in the binocular stereo matching step the disparity map produced by stereo matching is computed by seed-pixel selection and region growing, comprising: a seed-pixel selection step of selecting edge feature points as seed pixels for region growing; after the region growing from the edge-feature seed pixels is finished, selecting a pixel that does not belong to any grown region, computing the matching cost function with the correlation coefficient formula within a one-dimensional search window under the disparity constraint of the edge feature points, choosing the most reliable disparity, taking this pixel as a seed pixel and its disparity as the region disparity, and then proceeding to the region growing step; if no match point with a cost below a preset matching-cost threshold T is found, repeating this step for the next neighboring pixel; a region growing step of computing, with the disparity value of the seed pixel, the matching cost of the pixels adjacent to the seed pixel, including into the seed region the pixels that satisfy the constraint and discarding the others; and a disparity map generation step of repeating the region growing step until no more pixels can be merged, at which point the region is fully grown, returning to the seed-pixel selection step to find a new seed pixel and repeating the above steps, and generating the disparity map d(i, j) once all pixels in the image have been labeled.

In the above method, preferably, in the three-dimensional face reconstruction step, after the three-dimensional coordinates of the scattered spatial point cloud of the face are obtained, the method further comprises triangulation, mesh subdivision and mesh optimization of the three-dimensional face point cloud: a triangulation step of sorting the scattered points, searching for the point with minimum X coordinate, denoted v1, arranging the points in increasing order of squared distance to v1 to form the sequence v1, v2, v3, ..., vn, connecting v1 and v2 to build the first edge, searching the sequence for the first point vk that is not collinear with v1 and v2, inserting vk before v3 and shifting the remaining points back, connecting v1, v2 and vk to form the first triangle and the initial mesh front, and then expanding outwards point by point with an advancing-front mesh technique according to the max-min interior angle criterion to form the initial face triangle mesh; a mesh subdivision step using the Loop subdivision scheme, a subdivision based on a triangular control mesh whose generated surfaces are based on quartic B-spline surfaces; and a mesh optimization step of adjusting the positions of interior nodes by Laplacian fairing to obtain the optimized three-dimensional face model.

In another aspect, the invention also discloses an intelligent three-dimensional face reconstruction system based on binocular stereo vision, comprising a preprocessing module, a face detection and feature point extraction module, a camera calibration module, a binocular stereo matching module and a three-dimensional face reconstruction module. The preprocessing module performs preprocessing operations on the face image, including image normalization, brightness normalization and image rectification, and comprises an image normalization submodule, a brightness normalization submodule and an image rectification submodule; the face detection and feature point extraction module detects the face region in the preprocessed face image and extracts facial feature points; the camera calibration module reconstructs an object through the projection matrix to obtain the intrinsic and extrinsic camera parameters, i.e., the camera calibration result; the binocular stereo matching module extends, based on the facial feature points, a gray-level cross-correlation matching operator to color information and computes the disparity map produced by stereo matching according to information including the epipolar constraint, the face-region constraint and facial geometric conditions; and the three-dimensional face reconstruction module computes the three-dimensional coordinates of the scattered spatial point cloud of the face from the camera calibration result and the disparity map produced by stereo matching, and generates a three-dimensional face model.

In the above system, preferably, in the preprocessing module the image normalization submodule comprises: a rotation unit for rotating the face image so that the two eyes in the face image are horizontal; a midpoint adjustment unit for placing the midpoint of the line connecting the two eyes at the center of the image width and at a fixed position along the image height; and a scale transformation unit for scaling the rotated face image to obtain standard images of identical size.

In the above system, preferably, in the preprocessing module the brightness normalization submodule comprises: a test-image energy calculation unit, in which I(i, j) denotes the gray value of the pixel in row i and column j and the energy of the test image is computed as $\mathrm{Energy} = \sum_{i,j} I(i,j)^2$; a scale factor calculation unit for defining an average face, obtaining the energy AveryEnergy of the average face, and determining the scale factor ratio by $\mathrm{ratio} = \mathrm{AveryEnergy}/\mathrm{Energy}$; and a brightness normalization unit for brightness-normalizing each pixel of the test image according to $I'(i,j) = I(i,j)\sqrt{\mathrm{ratio}}$.

In the above system, preferably, in the preprocessing module the image rectification submodule is configured to: apply a two-dimensional transformation to each of the two images; the original image first undergoes a perspective transform, which maps the epipole to a point at infinity so that the pencil of epipolar lines becomes a set of parallel lines; a similarity transform is then applied to align every epipolar line with the horizontal axis of the image coordinate system; finally, to reduce distortion, a shearing transform is applied to minimize the image distortion in the horizontal direction.

In the above system, preferably, in the face detection and feature point extraction module the face region is determined by skin-color likelihood and comprises: a unit for transforming a color image from RGB space to YCbCr space, converting it into a gray-level image according to a two-dimensional Gaussian model G(m, Λ²), where the gray value corresponds to the likelihood that a pixel belongs to a skin region, and thresholding the gray-level image adaptively to obtain a binary image g_ycbcr; a unit for transforming the color image from RGB space to YIQ space, extracting the I component, marking a pixel as skin when 5 < I < 80 and as non-skin otherwise, and segmenting to obtain a binary image g_yiq; a unit for combining g_ycbcr and g_yiq with a logical AND to obtain the image g_skin; and a unit for applying a morphological closing to g_skin with a 3 × 3 unit matrix as the structuring element and, with the minimum face-region area set to 50 pixels, filling in skin-color regions smaller than 50 pixels.

In the above system, preferably, in the face detection and feature point extraction module the facial feature points are extracted by an active shape model, comprising: a unit for initializing the whole face with a generic whole-face shape template for open (or closed) eyes to obtain the approximate positions of the two outer eye corners; and a unit for estimating the mouth contour with a local active shape model to obtain the true mouth edge produced by the Canny operator and, if the eyes are detected as open and the mouth is detected as O-shaped, selecting the whole-face template with open eyes and an O-shaped mouth to search the whole facial contour.

In the above system, preferably, in the camera calibration module the intrinsic and extrinsic camera parameters are calibrated using a planar checkerboard target.

In the above system, preferably, in the binocular stereo matching module the disparity map produced by stereo matching is computed by seed-pixel selection and region growing, comprising: a seed-pixel selection unit for selecting edge feature points as seed pixels for region growing; after the region growing from the edge-feature seed pixels is finished, a pixel that does not belong to any grown region is selected, the matching cost function is computed with the correlation coefficient formula within a one-dimensional search window under the disparity constraint of the edge feature points, the most reliable disparity is chosen, the pixel becomes a seed pixel and its disparity the region disparity, followed by the region growing step; if no match point with a cost below a preset matching-cost threshold T is found, the step is repeated for the next neighboring pixel; a region growing unit for computing, with the disparity value of the seed pixel, the matching cost of the pixels adjacent to the seed pixel, including into the seed region the pixels that satisfy the constraint and discarding the others; and a disparity map generation unit for repeating the region growing step until no more pixels can be merged, at which point the region is fully grown, returning to the seed-pixel selection step to find a new seed pixel and repeating the above steps, and generating the disparity map d(i, j) once all pixels in the image have been labeled.

In the above system, preferably, in the three-dimensional face reconstruction module, after the three-dimensional coordinates of the scattered spatial point cloud of the face are obtained, the module further comprises units for triangulation, mesh subdivision and mesh optimization of the three-dimensional face point cloud: a triangulation unit for sorting the scattered points, searching for the point with minimum X coordinate, denoted v1, arranging the points in increasing order of squared distance to v1 to form the sequence v1, v2, v3, ..., vn, connecting v1 and v2 to build the first edge, searching the sequence for the first point vk that is not collinear with v1 and v2, inserting vk before v3 and shifting the remaining points back, connecting v1, v2 and vk to form the first triangle and the initial mesh front, and then expanding outwards point by point with an advancing-front mesh technique according to the max-min interior angle criterion to form the initial face triangle mesh; a mesh subdivision unit using the Loop subdivision scheme, a subdivision based on a triangular control mesh whose generated surfaces are based on quartic B-spline surfaces; and a mesh optimization unit for adjusting the positions of interior nodes by Laplacian fairing to obtain the optimized three-dimensional face model.

Compared with the prior art, the present invention reconstructs a smooth and lifelike three-dimensional face model through face image normalization, face detection and feature point extraction, camera calibration, binocular stereo matching and three-dimensional face reconstruction.

Description of drawings

Fig. 1 is a flowchart of an embodiment of the intelligent three-dimensional face reconstruction method based on binocular stereo vision according to the present invention;

Fig. 2 is a schematic diagram of the mathematical model of a linear (pinhole) camera;

Fig. 3a is a schematic diagram of the binocular vision three-dimensional imaging principle;

Fig. 3b is a schematic diagram of the binocular vision three-dimensional imaging principle;

Fig. 4 is a structural diagram of the intelligent three-dimensional face reconstruction system based on binocular stereo vision.

Embodiment

To make the above objects, features and advantages of the present invention more apparent, the invention is described in further detail below with reference to the accompanying drawings and specific embodiments.

Embodiments of the invention are described with reference to Fig. 1 through Fig. 3b.

As shown in Fig. 1, left and right face images are first acquired by the cameras and preprocessed, including image normalization, brightness normalization and image rectification. The face region is then located with a skin-color likelihood estimate, and facial feature points are extracted with an active shape model (ASM), which makes it possible to exploit prior knowledge of facial structure. During camera calibration, an object is reconstructed through the projection matrix to obtain the intrinsic and extrinsic camera parameters while keeping the computational complexity low. By extending the gray-level cross-correlation matching operator to color information, low-texture regions such as facial skin can be matched. To reduce the search region and increase stereo matching accuracy, the matching algorithm computes disparity under the epipolar constraint, the face-region constraint and facial geometric conditions. Finally, the three-dimensional coordinates of the scattered spatial point cloud of the face are computed from the camera calibration result and the disparity map produced by stereo matching, and the point cloud is triangulated, subdivided and smoothed to produce a smooth, lifelike three-dimensional face model.

The specific implementation is described below, part by part.

Face image normalization:

Because face detection based on image likelihood depends on the spatial correlation of image gray levels, the face image must first undergo a series of preprocessing steps for position correction and gray-level normalization. First, the image is rotated so that the line connecting the two eyes is horizontal, which guarantees a consistent face orientation. Second, the midpoint of the eye line is placed at the center of the image width and at a fixed position along the image height, which guarantees a consistent face position. Third, the image is scaled to obtain calibrated images of uniform size.
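The geometric normalization above can be sketched as follows. This is an illustrative Python/OpenCV sketch, not part of the claimed method; the eye coordinates are assumed to come from an earlier detection step, and the output size, eye-line height and inter-eye distance fraction are assumed parameters.

```python
import cv2
import numpy as np

def normalize_face(img, left_eye, right_eye, out_size=(128, 128),
                   eye_y=0.35, eye_dist_frac=0.4):
    """Rotate so the eye line is horizontal, scale to a common inter-eye
    distance, and place the eye midpoint at the horizontal center and at a
    fixed height of the output image."""
    (xl, yl), (xr, yr) = left_eye, right_eye
    angle = np.degrees(np.arctan2(yr - yl, xr - xl))   # tilt of the eye line
    mid = ((xl + xr) / 2.0, (yl + yr) / 2.0)           # midpoint between the eyes
    dist = np.hypot(xr - xl, yr - yl)
    scale = (eye_dist_frac * out_size[0]) / dist       # bring faces to a common scale
    M = cv2.getRotationMatrix2D(mid, angle, scale)     # rotate + scale about the midpoint
    # shift the midpoint to the image-width center at a fixed height
    M[0, 2] += out_size[0] / 2.0 - mid[0]
    M[1, 2] += eye_y * out_size[1] - mid[1]
    return cv2.warpAffine(img, M, out_size)
```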

In addition, because an image is inevitably affected by illumination from different directions, one side of the face may appear brighter or darker, which often harms detection or recognition. To remove the influence of illumination without changing the relative brightness of the pixels, we use the notion of image energy, defined as the sum of the squared gray values of all pixels. To give every test region the same energy, an average face is taken first and its energy AveryEnergy obtained; dividing it by the energy Energy of the test region gives the scale factor ratio = AveryEnergy / Energy, and the gray value of every pixel of the test image is multiplied by the square root of this ratio, i.e. $I'(i,j) = I(i,j)\sqrt{\mathrm{ratio}}$, so that every normalized test image has the same energy.
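A minimal sketch of this energy-based brightness normalization, assuming the average-face energy AveryEnergy has been precomputed:

```python
import numpy as np

def brightness_normalize(test_img, avery_energy):
    """Scale the gray values of a test image so its energy (sum of squared
    gray values) matches the energy of a reference 'average face'."""
    img = test_img.astype(np.float64)
    energy = np.sum(img ** 2)            # Energy = sum_{i,j} I(i,j)^2
    ratio = avery_energy / energy        # ratio = AveryEnergy / Energy
    return img * np.sqrt(ratio)          # I'(i,j) = I(i,j) * sqrt(ratio)
```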

In stereo matching we want the following result: if p1 = (x1, y1, 1) is a point in the left image, then its corresponding epipolar line in the right image is y = y1 and the corresponding point is p2 = (x2, y2, 1). It follows that the images must be rectified to make matching efficient.

Image rectification applies a two-dimensional transformation to each of the two images, namely

$U = U_s U_r U_p$

The transformations act on image points from right to left. The original image first undergoes the perspective transform $U_p$, which maps the epipole to a point at infinity so that the pencil of epipolar lines becomes a set of parallel lines. The similarity transform $U_r$ is then applied, aligning every epipolar line with the horizontal axis of the image coordinate system, i.e., keeping each epipolar line horizontal. Finally, to reduce distortion, the shearing transform $U_s$ is applied to minimize the image distortion in the horizontal direction.
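The composed rectifying homography can be applied to each image in one warp. In the sketch below the matrices Hp, Hr and Hs are assumed to have been estimated beforehand (e.g., from the fundamental matrix); the helper only composes and warps:

```python
import numpy as np
import cv2

def rectify_image(img, Hp, Hr, Hs):
    """Apply the composed rectifying homography U = Hs · Hr · Hp to one image.
    Hp sends the epipole to infinity, Hr makes the epipolar lines horizontal,
    Hs limits the horizontal distortion."""
    U = Hs @ Hr @ Hp                     # applied to points right-to-left: Hp first
    h, w = img.shape[:2]
    return cv2.warpPerspective(img, U, (w, h))
```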

Skin-color-based face detection:

We choose the YCbCr color model and convert the input color image from the highly correlated RGB space to the YCbCr space, whose color components are mutually uncorrelated. The conversion formula is:

$$\begin{bmatrix} Y \\ C_b \\ C_r \\ 1 \end{bmatrix} = \begin{bmatrix} 0.2990 & 0.5870 & 0.1140 & 0 \\ -0.1687 & -0.3313 & 0.5000 & 128 \\ 0.5000 & -0.4187 & -0.0813 & 128 \\ 0 & 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} R \\ G \\ B \\ 1 \end{bmatrix}$$

The YIQ color space derives from the NTSC television standard. The Y component carries luminance, while the I and Q components carry chrominance: I represents the color change from orange to cyan, and Q the change from purple to yellow-green. The I component essentially covers the color range of human skin and is the most sensitive to the few skin color classes. The main steps of face detection based on skin-color likelihood are as follows (a code sketch of these steps is given after the list):

(1) Transform the color image from RGB space to YCbCr space. According to the two-dimensional Gaussian model G(m, Λ²), convert the color image into a gray-level image in which the gray value corresponds to the likelihood that the pixel belongs to a skin region; then take a threshold adaptively to turn the gray-level image into a binary image g_ycbcr.

(2) Transform the color image from RGB space to YIQ space and extract the I component. A pixel whose I value satisfies 5 < I < 80 is a skin pixel, otherwise it is not; segmentation yields the binary image g_yiq.

(3) Combine the binary images g_ycbcr and g_yiq with a logical AND to obtain the image g_skin.

(4) Apply a morphological closing to the binary image g_skin with a 3 × 3 unit matrix as the structuring element, and then fill in skin-color regions smaller than 50 pixels (the minimum face-region area is set to 50 pixels).
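An illustrative sketch of this skin-likelihood pipeline with OpenCV and NumPy; the Gaussian skin-model parameters (mean, cov) are assumed inputs, Otsu thresholding stands in for the adaptive threshold when none is given, and the I component uses the standard NTSC definition I = 0.596R − 0.274G − 0.322B:

```python
import numpy as np
import cv2

def skin_mask(bgr, mean, cov, likelihood_thresh=None):
    """Skin-likelihood mask: Gaussian likelihood in (Cb, Cr), YIQ I-component
    test, AND of both masks, 3x3 closing, removal of regions < 50 pixels."""
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float64)
    cbcr = ycrcb[..., [2, 1]]                                  # (Cb, Cr) per pixel
    d = cbcr - mean
    like = np.exp(-0.5 * np.einsum('...i,ij,...j->...', d, np.linalg.inv(cov), d))
    if likelihood_thresh is None:                              # adaptive threshold (Otsu here)
        g = np.uint8(255 * like / like.max())
        _, g_ycbcr = cv2.threshold(g, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    else:
        g_ycbcr = np.uint8(255 * (like > likelihood_thresh))

    b, gc, r = [bgr[..., k].astype(np.float64) for k in (0, 1, 2)]
    I = 0.596 * r - 0.274 * gc - 0.322 * b                     # NTSC YIQ I component
    g_yiq = np.uint8(255 * ((I > 5) & (I < 80)))               # 5 < I < 80 -> skin

    g_skin = cv2.bitwise_and(g_ycbcr, g_yiq)                   # AND of the two binary maps
    kernel = np.ones((3, 3), np.uint8)                         # 3x3 structuring element
    g_skin = cv2.morphologyEx(g_skin, cv2.MORPH_CLOSE, kernel) # morphological closing
    n, labels, stats, _ = cv2.connectedComponentsWithStats(g_skin, connectivity=8)
    for k in range(1, n):                                      # drop regions < 50 pixels
        if stats[k, cv2.CC_STAT_AREA] < 50:
            g_skin[labels == k] = 0
    return g_skin
```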

Facial feature point extraction:

Points that are occluded or invisible are estimated from the detected points using the global shape and texture constraints of an active appearance model (AAM). The facial feature points are defined mainly around the eyes, nose, eyebrows, mouth and face boundary; these points convey the common shape information of any face. First, a generic whole-face shape template for open eyes (or for closed eyes) is used to initialize the whole face, giving the approximate positions of the two outer eye corners. Then a local active shape model (ASM) estimates the mouth contour and obtains the true mouth edge produced by the Canny operator. If the eyes are detected as open and the mouth as O-shaped, the whole-face template with open eyes and an O-shaped mouth is selected to search the whole facial contour; the other cases are handled similarly. Using a multiresolution search, the whole facial contour is obtained when the ASM converges or the maximum number of iterations is reached. In total, 64 feature points are located automatically on the face.

Camera calibration:

As shown in Fig. 2, to extract 3D face information from a stereo face image pair, camera calibration must be performed, i.e., the intrinsic and extrinsic camera parameters are obtained from a set of known conditions using a camera model. The most commonly used camera model is the pinhole perspective model.

We adopt a camera calibration method based on a planar checkerboard target. To simplify the computation, let the template plane be the Z_w = 0 plane of the world coordinate system, and let r_i denote the i-th column vector of R. For a point on the template plane we then have:

$$s\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A\,[\,r_1\ \ r_2\ \ r_3\ \ t\,]\begin{bmatrix} X_w \\ Y_w \\ 0 \\ 1 \end{bmatrix} = A\,[\,r_1\ \ r_2\ \ t\,]\begin{bmatrix} X_w \\ Y_w \\ 1 \end{bmatrix}$$

where (u, v) and (X_w, Y_w, Z_w) are the coordinates of the spatial point in the image coordinate system and in the world coordinate system, f_x, f_y, u_0, v_0 are the camera intrinsic parameters, and the rotation matrix R and translation vector t, which describe the relation between the camera and the world coordinate system, are the camera extrinsic parameters. Let H = A[r_1 r_2 t] = [h_1 h_2 h_3]; the above formula can then be written as:

$$s\,\tilde{m} = H\,\tilde{M}$$

where A is the camera intrinsic matrix and H is the mapping between a point on the template and its image point. Given the space coordinates and the image coordinates of the template points, the H matrix can be obtained by solving a least-squares problem and further refined with the Levenberg-Marquardt algorithm, after which the intrinsic and extrinsic camera parameters can be computed. The intrinsic parameters are obtained by Cholesky decomposition; the extrinsic parameters are then computed from A and H as follows:

$$r_1 = \lambda A^{-1}h_1,\quad r_2 = \lambda A^{-1}h_2,\quad r_3 = r_1 \times r_2,\quad t = \lambda A^{-1}h_3,\quad \lambda = \frac{1}{\lVert A^{-1}h_1\rVert} = \frac{1}{\lVert A^{-1}h_2\rVert}$$

Lens distortion has a considerable influence on the calibration result. Usually only radial distortion is considered in calibration and other distortion factors are ignored. Here we adopt Zhang's method to handle distortion.
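For illustration, the checkerboard calibration can be reproduced with OpenCV's implementation of Zhang's method; the file pattern, board size and square size below are assumptions, and the least-squares/Levenberg-Marquardt refinement and radial-distortion estimation happen inside calibrateCamera:

```python
import glob
import numpy as np
import cv2

def calibrate_from_checkerboard(image_glob, pattern=(9, 6), square=25.0):
    """Planar-checkerboard calibration: returns the intrinsic matrix,
    distortion coefficients and per-view extrinsics (rvecs, tvecs)."""
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square  # Zw = 0 plane
    obj_pts, img_pts, size = [], [], None
    for path in glob.glob(image_glob):
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            corners = cv2.cornerSubPix(
                gray, corners, (11, 11), (-1, -1),
                (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]
    rms, A, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return A, dist, rvecs, tvecs
```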

Binocular stereo matching:

The size of the matching window and the choice of similarity measure directly affect the accuracy and efficiency of matching. As a trade-off, a 5 × 5 window is selected for feature matching. For the similarity measure, a suitable operator should be chosen after comparing the effects of the candidates; we use the correlation coefficient (cross-correlation function) r(d_x, d_y), computed as:

$$r(d_x,d_y) = \frac{\displaystyle\sum_{(x,y)\in S}\bigl[f_1(x,y)-\bar f_1\bigr]\bigl[f_2(x+d_x,y+d_y)-\bar f_2\bigr]}{\Bigl\{\displaystyle\sum_{(x,y)\in S}\bigl[f_1(x,y)-\bar f_1\bigr]^2 \sum_{(x,y)\in S}\bigl[f_2(x+d_x,y+d_y)-\bar f_2\bigr]^2\Bigr\}^{1/2}}$$

where $\bar f_1$ and $\bar f_2$ are the mean gray values of all pixels in the two matched regions; the feature point with the maximum correlation value is the matching feature point.
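A small sketch of the window-based correlation coefficient on gray images (for color images the sums can simply be accumulated over the channels); window-boundary checking is omitted:

```python
import numpy as np

def ncc(f1, f2, x, y, dx, dy, half=2):
    """Correlation coefficient r(dx, dy) between the (2*half+1)^2 window centred
    at (x, y) in the left image f1 and the window displaced by (dx, dy) in the
    right image f2 (half=2 gives the 5x5 window used here)."""
    w1 = f1[y - half:y + half + 1, x - half:x + half + 1].astype(np.float64)
    w2 = f2[y + dy - half:y + dy + half + 1,
            x + dx - half:x + dx + half + 1].astype(np.float64)
    a = w1 - w1.mean()                    # f1 - mean(f1)
    b = w2 - w2.mean()                    # f2 - mean(f2)
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0
```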

The constraints we adopt are as follows:

1) Face-region constraint: using the accurate face region obtained earlier, together with the epipolar constraint, the search is restricted to a very small region, which greatly improves performance.

2) Face symmetry constraint: for example, if a point in the left image lies on the left side of the face, its corresponding match in the right image can only lie on the left side of the face; if the matched point lies on the right side, it must be a false match and is deleted before re-matching. This further reduces the search range.

3) Maximum disparity constraint: the search range is limited to within a certain disparity value.

The two key steps of the region-growing stereo matching algorithm are:

1) selecting a set of seed pixels that correctly represent the desired region;

2) determining the criterion by which neighboring pixels are included during growing.

Here, the criterion for merging a neighboring pixel into the region of a seed pixel is as follows: for a point adjacent to the seed pixel, substitute the disparity value d of the seed pixel into the matching cost function; if e(i, j, d) < T, where T is a preset matching-cost threshold, the neighbor is included. All pixels merged into the seed region then have the same disparity value, i.e., the whole grown region shares one disparity. The criterion can be written as:

$$d(i+1,j) = \begin{cases} d(i,j), & e\bigl(i+1,j,d(i,j)\bigr) \le T \\[4pt] \arg\min_{d}\, e(i+1,j,d), & e\bigl(i+1,j,d(i,j)\bigr) > T \end{cases}$$

In fact, this algorithm can be understood as a stereo matching algorithm based on disparity growing. The specific algorithm is as follows (a code sketch is given after these steps):

Seed pixel selection: at initialization, edge feature points are selected as seed pixels for region growing. After the region growing from the edge-feature seed pixels is finished, a pixel that does not belong to any grown region is selected; within a one-dimensional search window under the disparity constraint of the edge feature points, the matching cost function is computed with the correlation coefficient (cross-correlation) formula and the most reliable disparity is chosen. This pixel becomes a seed pixel, its disparity becomes the region disparity, and the second step is executed. If no match point with a cost below the preset matching-cost threshold T is found, this step is repeated for the next neighboring pixel.

Region growing: using the disparity value of the seed pixel, compute the matching cost of the pixels adjacent to the seed pixel. A pixel that satisfies the above criterion is included into the seed region; otherwise it is discarded.

Disparity map generation: repeat the second step until no more pixels can be merged; the region is then fully grown. Return to the first step, find a new seed pixel and repeat the steps above. The algorithm ends when all pixels in the image have been labeled, at which point the disparity map d(i, j) is generated.
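A sketch of the disparity-growing loop, assuming a precomputed matching-cost volume and a list of edge-feature seeds with reliable disparities; the re-seeding pass that picks the best disparity for leftover pixels is not shown:

```python
import numpy as np

def grow_disparity(cost, seeds, T):
    """cost[y, x, d] is a matching-cost volume (e.g. 1 - correlation
    coefficient), seeds is a list of (y, x, d) seed pixels, T is the
    matching-cost threshold.  Pixels adjacent to a grown region inherit its
    disparity when their cost at that disparity stays below T."""
    h, w, _ = cost.shape
    disp = np.full((h, w), -1, np.int32)        # -1 = not yet labelled
    for y, x, d in seeds:
        disp[y, x] = d
    stack = list(seeds)
    while stack:
        y, x, d = stack.pop()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and disp[ny, nx] < 0:
                if cost[ny, nx, d] <= T:        # e(i, j, d) <= T: join the region
                    disp[ny, nx] = d
                    stack.append((ny, nx, d))
                # otherwise the pixel is left for a later seed
    return disp
```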

Three-dimensional face reconstruction:

In this application, the binocular stereo vision system consists of two identical cameras whose coordinate axes are parallel, i.e., the parallel-optical-axis model. Let the focal length be f and the distance between the two optical centers (the baseline) be b. Taking the left camera coordinate system as the world coordinate system, the world coordinates of a spatial point P are (X, Y, Z), its coordinates in the left and right camera coordinate systems are (x1, y1, z1) and (x2, y2, z2), and its coordinates on the left and right image planes are (u1, v1) and (u2, v2), with disparity d = u1 − u2.

The three-dimensional coordinates of the spatial point can then be computed as:

$$X = \frac{z}{f}u_1 = \frac{u_1\, b}{d},\qquad Y = \frac{z}{f}v_1 = \frac{v_1\, b}{d},\qquad Z = \frac{f\, b}{d}$$

The above formula directly relates the depth Z of the three-dimensional object to the disparity d; the depth reflects the three-dimensional spatial information. Once the baseline and focal length are known and the disparity d is determined, the Z coordinate of P can be computed, and the world coordinates X and Y of P follow. In this way, the formula gives the three-dimensional coordinates of all matched points of the stereo face image pair. To obtain a smooth depth map, the disparity d is smoothed with a 5 × 5 median filter before computing the depth Z.
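A sketch of the back-projection under the parallel-optical-axis model, following the simplified formula above (image coordinates are assumed to be measured from the principal point), with the 5 × 5 median filtering of the disparity applied first:

```python
import numpy as np
import cv2

def disparity_to_points(disp, f, b, median_ksize=5):
    """Z = f*b/d, X = u1*b/d, Y = v1*b/d in the left-camera frame (d = u1 - u2).
    Returns an HxWx3 point map and a validity mask (d > 0)."""
    d = cv2.medianBlur(disp.astype(np.float32), median_ksize)
    h, w = d.shape
    u1, v1 = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    valid = d > 0
    dsafe = np.where(valid, d, 1.0)              # avoid division by zero
    X = np.where(valid, u1 * b / dsafe, 0.0)
    Y = np.where(valid, v1 * b / dsafe, 0.0)
    Z = np.where(valid, f * b / dsafe, 0.0)
    return np.dstack([X, Y, Z]), valid
```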

Once the three-dimensional coordinates of all matched points have been computed, the discrete points could be displayed directly in three-dimensional space, but to obtain a realistic three-dimensional face model the three-dimensional data points must be triangulated. Triangulating directly in three-dimensional space would require examining the relative position of every three neighboring points, namely that the sphere they define must not contain any other point, which is rather complicated. Instead, the three-dimensional data points are projected one-to-one onto a two-dimensional planar region, the projected points are triangulated in that region, and the triangulation of the spatial points is then obtained through the point correspondence. To obtain a high-quality triangular mesh for the three-dimensional face model, the following operations are performed on the scattered face data points:

Step 1, triangulation: sort the scattered point cloud and search for the point with minimum X coordinate, denoted v1. Arrange the points in increasing order of squared distance to v1, forming the sequence v1, v2, v3, ..., vn. Connect v1 and v2 to build the first edge, then search the sequence for the first point vk that is not collinear with v1 and v2, insert vk before v3 and shift the remaining points back. Connect v1, v2 and vk to form the first triangle and the initial mesh front. Then, using an advancing-front mesh technique, expand outwards point by point according to the max-min interior angle criterion to form the initial face triangle mesh.
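As a stand-in for the advancing-front construction of Step 1, the same project-then-triangulate idea can be sketched with a standard Delaunay triangulation of the projected points (Delaunay triangulation also maximizes the minimum interior angle):

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_face_cloud(points_3d):
    """Build an initial face mesh by triangulating the scattered 3-D points
    through their one-to-one 2-D projection onto the (X, Y) plane."""
    pts2d = points_3d[:, :2]            # project to the image-facing plane
    tri = Delaunay(pts2d)               # triangulate the projected points
    return tri.simplices                # index triples into points_3d
```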

Step 2, mesh subdivision: the Loop subdivision scheme is adopted, a subdivision based on a triangular control mesh whose generated surfaces are based on quartic B-spline surfaces. It has been proved that the subdivision surface is C² continuous at regular vertices and C¹ continuous at extraordinary vertices, and the scheme has simple subdivision rules and produces smooth results. The Loop scheme is an approximating 1-to-4 splitting method: a new node is inserted on each edge of a triangle, splitting the edge in two, and the original triangle vertices are replaced by new nodes, so each triangular element is replaced by four smaller ones. The rules for generating the inserted points are as follows:

(1) If an interior edge has the two vertices v_0 and v_1 and the two triangles sharing this edge are (v_0, v_1, v_2) and (v_0, v_1, v_3), the new edge point v_E is:
$$v_E = \tfrac{3}{8}(v_0+v_1) + \tfrac{1}{8}(v_2+v_3)$$

(2) If the 1-ring neighbor vertices of an interior vertex v are v_i (i = 0, 1, ..., n−1), the new vertex position v_V is:
$$v_V = (1-n\beta)\,v + \beta\sum_{i=0}^{n-1} v_i$$

where β is the neighbor-vertex weight:

$$\beta = \frac{1}{n}\left(\frac{5}{8} - \Bigl(\frac{3}{8} + \frac{1}{4}\cos\frac{2\pi}{n}\Bigr)^{2}\right),\qquad n = |v|_E\ \text{(the number of edges incident to } v\text{)}$$

(3) If the two vertices of a boundary edge are v_0 and v_1, the new edge point v_E is:
$$v_E = \tfrac{1}{2}(v_0+v_1)$$

(4) If the two neighbors of a boundary vertex v on the boundary are v_0 and v_1, the new vertex position is:
$$v_V = \tfrac{3}{4}\,v + \tfrac{1}{8}(v_0+v_1)$$

The edge point v_E of every mesh edge and the new vertex position v_V of every vertex are computed according to these rules, and the new edge points and new vertices are connected to generate a new triangle mesh; the process is repeated until it converges to the limit surface.
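The per-point Loop rules of Step 2 can be written compactly as below; assembling a full subdivision pass additionally requires the mesh-connectivity bookkeeping, which is omitted here:

```python
import numpy as np

def loop_beta(n):
    """Loop's neighbour weight for a vertex of valence n."""
    return (5.0 / 8.0 - (3.0 / 8.0 + 0.25 * np.cos(2.0 * np.pi / n)) ** 2) / n

def loop_edge_point(v0, v1, v2, v3):
    """New edge point for an interior edge (v0, v1) shared by triangles
    (v0, v1, v2) and (v0, v1, v3)."""
    return 3.0 / 8.0 * (v0 + v1) + 1.0 / 8.0 * (v2 + v3)

def loop_vertex_point(v, neighbours):
    """Repositioned old vertex computed from its 1-ring neighbours."""
    n = len(neighbours)
    beta = loop_beta(n)
    return (1.0 - n * beta) * v + beta * np.sum(neighbours, axis=0)
```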

Step 3, mesh optimization: the positions of interior nodes are adjusted by Laplacian fairing to obtain the optimized three-dimensional face model. For a triangle mesh vertex v with adjacent vertices v_i (i = 0, 1, ..., n−1), the first- and second-order Laplacian operators of v are defined as:

$$U(v) = \frac{1}{n}\sum_{i=0}^{n-1} v_i - v,\qquad U^{2}(v) = \frac{1}{n}\sum_{i=0}^{n-1} U(v_i) - U(v)$$

The vertex positions are then updated with an iterative smoothing formula based on these operators, where n_i and n_{i,j} denote the degree of vertex p_i and the degree of its j-th adjacent vertex, respectively.
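A generic sketch of the iterative Laplacian fairing of Step 3, using the umbrella operator U(v) defined above with a single relaxation factor (the exact degree-based weights of the text are not reproduced here):

```python
import numpy as np

def laplacian_smooth(verts, neighbours, lam=0.5, iters=10):
    """Iterative Laplacian fairing: v <- v + lam * U(v), where U(v) is the
    mean of the 1-ring neighbours minus the vertex itself.  `neighbours[i]`
    is a list of vertex indices adjacent to vertex i."""
    v = verts.astype(np.float64).copy()
    for _ in range(iters):
        lap = np.array([v[nb].mean(axis=0) - v[i]       # U(v_i)
                        for i, nb in enumerate(neighbours)])
        v += lam * lap
    return v
```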

In another aspect, the present invention also provides an intelligent three-dimensional face reconstruction system based on binocular stereo vision, the system comprising:

As shown in Fig. 4, a preprocessing module 40 performs preprocessing operations on the face image, including image normalization, brightness normalization and image rectification; a face detection and feature point extraction module 41 detects the face region in the preprocessed face image and extracts facial feature points; a camera calibration module 42 reconstructs an object through the projection matrix to obtain the intrinsic and extrinsic camera parameters, i.e., the camera calibration result; a binocular stereo matching module 43 extends, based on the facial feature points, a gray-level cross-correlation matching operator to color information and computes the disparity map produced by stereo matching according to information including the epipolar constraint, the face-region constraint and facial geometric conditions; and a three-dimensional face reconstruction module 44 computes the three-dimensional coordinates of the scattered spatial point cloud of the face from the camera calibration result and the disparity map produced by stereo matching, and generates a three-dimensional face model.

The principle of the intelligent three-dimensional face reconstruction system based on binocular stereo vision is similar to that of the method embodiment; the two may be consulted with reference to each other and are not repeated here.

The intelligent three-dimensional face reconstruction method and system based on binocular stereo vision provided by the present invention have been described in detail above. Specific examples have been used to explain the principle and embodiments of the invention, and the description of the above embodiments is intended only to help understand the method of the invention and its core idea. Meanwhile, for those of ordinary skill in the art, both the specific embodiments and the scope of application may vary according to the idea of the invention. In summary, the content of this description should not be construed as limiting the invention.

Claims (18)

1. An intelligent three-dimensional face reconstruction method based on binocular stereo vision, characterized in that it comprises the steps of:
a preprocessing step of performing preprocessing operations on a face image, including image normalization, brightness normalization and image rectification;
a face detection and feature point extraction step of detecting the face region in the preprocessed face image and extracting facial feature points;
a camera calibration step of reconstructing an object through the projection matrix to obtain the intrinsic and extrinsic camera parameters, i.e., the camera calibration result;
a binocular stereo matching step of extending, based on said facial feature points, a gray-level cross-correlation matching operator to color information and computing the disparity map produced by stereo matching according to information including the epipolar constraint, the face-region constraint and facial geometric conditions;
a three-dimensional face reconstruction step of computing the three-dimensional coordinates of the scattered spatial point cloud of the face from said camera calibration result and said disparity map produced by stereo matching, and generating a three-dimensional face model.
2. The intelligent three-dimensional face reconstruction method according to claim 1, characterized in that, in said preprocessing step, the image normalization comprises:
a rotation step of rotating said face image so that the two eyes in said face image are horizontal;
a midpoint adjustment step of placing the midpoint of the line connecting the two eyes at the center of the image width and at a fixed position along the height of said face image;
a scale transformation step of scaling said rotated face image to obtain standard images of identical size.
3. The intelligent three-dimensional face reconstruction method according to claim 2, characterized in that, in said preprocessing step, the brightness normalization comprises:
a test-image energy calculation step, in which I(i, j) denotes the gray value of the pixel in row i and column j and the energy of the test image is computed as $\mathrm{Energy} = \sum_{i,j} I(i,j)^2$;
a scale factor calculation step of defining an average face, obtaining the energy AveryEnergy of said average face, and determining the scale factor ratio according to the formula:
$$\frac{\mathrm{AveryEnergy}}{\mathrm{Energy}} = \mathrm{ratio};$$
a brightness normalization step of brightness-normalizing each pixel of the test image according to $I'(i,j) = I(i,j)\sqrt{\mathrm{ratio}}$.
4. The intelligent three-dimensional face reconstruction method according to claim 3, characterized in that, in said preprocessing step, the image rectification comprises:
applying a two-dimensional transformation to each of the two images, wherein the original image first undergoes a perspective transform that maps the epipole to a point at infinity so that the pencil of epipolar lines becomes a set of parallel lines; a similarity transform is then applied to align every epipolar line with the horizontal axis of the image coordinate system; and finally, to reduce distortion, a shearing transform is applied to minimize the image distortion in the horizontal direction.
5. The intelligent three-dimensional face reconstruction method according to claim 4, characterized in that, in said face detection and feature point extraction step, said face region is determined by skin-color likelihood, comprising the steps of:
transforming a color image from RGB space to YCbCr space, converting the color image into a gray-level image according to a two-dimensional Gaussian model G(m, Λ²), where the gray value corresponds to the likelihood that a pixel belongs to a skin region, and taking a threshold adaptively to turn the gray-level image into a binary image g_ycbcr;
transforming the color image from RGB space to YIQ space, extracting the I component, marking a pixel as skin when 5 < I < 80 and as non-skin otherwise, and segmenting to obtain a binary image g_yiq;
combining the binary images g_ycbcr and g_yiq with a logical AND to obtain the image g_skin;
applying a morphological closing to the binary image g_skin with a 3 × 3 unit matrix as the structuring element, setting the minimum face-region area to 50 pixels, and filling in skin-color regions smaller than 50 pixels.
6. The intelligent three-dimensional face reconstruction method according to claim 5, characterized in that, in said face detection and feature point extraction step, extracting said facial feature points with an active shape model comprises:
initializing the whole face with a generic whole-face shape template for open or closed eyes to obtain the approximate positions of the two outer eye corners;
estimating the mouth contour with a local active shape model to obtain the true mouth edge produced by the Canny operator, and, if the eyes are detected as open and the mouth is detected as O-shaped, selecting the whole-face template with open eyes and an O-shaped mouth to search the whole facial contour.
7. The intelligent three-dimensional face reconstruction method according to claim 6, characterized in that, in said camera calibration step, the intrinsic and extrinsic camera parameters are calibrated using a planar checkerboard target.
8. The intelligent three-dimensional face reconstruction method according to claim 7, characterized in that, in said binocular stereo matching step, computing the disparity map produced by stereo matching is realized by seed-pixel selection and region growing, comprising:
a seed-pixel selection step of selecting edge feature points as seed pixels for region growing; after the region growing from the edge-feature seed pixels is finished, selecting a pixel that does not belong to any grown region, computing the matching cost function with the correlation coefficient formula within a one-dimensional search window under the disparity constraint of the edge feature points, choosing the most reliable disparity, taking this pixel as a seed pixel and its disparity as the region disparity, and then proceeding to the region growing step; if no match point with a cost below a preset matching-cost threshold T is found, repeating this step for the next neighboring pixel;
a region growing step of computing, with the disparity value of the seed pixel, the matching cost of the pixels adjacent to the seed pixel, including into the seed region the pixels that satisfy the constraint and discarding the others;
a disparity map generation step of repeating the region growing step until no more pixels can be merged, at which point the region is fully grown, returning to the seed-pixel selection step to find a new seed pixel and repeating the above steps, and generating the disparity map d(i, j) once all pixels in the image have been labeled.
9. The intelligent three-dimensional face reconstruction method according to claim 8, characterized in that, in said three-dimensional face reconstruction step, after the three-dimensional coordinates of the scattered spatial point cloud of the face are obtained, the method further comprises triangulation, mesh subdivision and mesh optimization of the three-dimensional face point cloud:
a triangulation step of sorting the scattered points, searching for the point with minimum X coordinate, denoted v1, arranging the points in increasing order of squared distance to v1 to form the sequence v1, v2, v3, ..., vn, connecting v1 and v2 to build the first edge, searching the sequence for the first point vk that is not collinear with v1 and v2, inserting vk before v3 and shifting the remaining points back, connecting v1, v2 and vk to form the first triangle and the initial mesh front, and then expanding outwards point by point with an advancing-front mesh technique according to the max-min interior angle criterion to form the initial face triangle mesh;
a mesh subdivision step using the Loop subdivision scheme, a subdivision based on a triangular control mesh whose generated surfaces are based on quartic B-spline surfaces;
a mesh optimization step of adjusting the positions of interior nodes by Laplacian fairing to obtain the optimized three-dimensional face model.
10. An intelligent three-dimensional face reconstruction system based on binocular stereo vision, characterized in that it comprises:
a preprocessing module for performing preprocessing operations on a face image, including image normalization, brightness normalization and image rectification, and comprising an image normalization submodule, a brightness normalization submodule and an image rectification submodule;
a face detection and feature point extraction module for detecting the face region in the preprocessed face image and extracting facial feature points;
a camera calibration module for reconstructing an object through the projection matrix to obtain the intrinsic and extrinsic camera parameters, i.e., the camera calibration result;
a binocular stereo matching module for extending, based on said facial feature points, a gray-level cross-correlation matching operator to color information and computing the disparity map produced by stereo matching according to information including the epipolar constraint, the face-region constraint and facial geometric conditions;
a three-dimensional face reconstruction module for computing the three-dimensional coordinates of the scattered spatial point cloud of the face from said camera calibration result and said disparity map produced by stereo matching, and generating a three-dimensional face model.
11. The intelligent three-dimensional human face rebuilding system according to claim 10, characterized in that, in the preprocessing module, the image normalization submodule comprises:
a rotation unit, used for rotating the face image so that the two eyes in the face image are level;
a midpoint adjustment unit, used for adjusting the midpoint of the line connecting the two eyes so that it lies at the center of the image width and at a fixed position along the height of the face image;
a scale transformation unit, used for applying a scale transformation to the rotated face image to obtain standard images of the same size.
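The following sketch combines the three units above into a single affine warp driven by the two eye coordinates. The output size, the vertical eye-row position and the target inter-eye distance are illustrative assumptions, not values taken from the patent.

```python
import cv2
import numpy as np

def normalize_face(image, left_eye, right_eye, out_size=(128, 128), eye_row_ratio=0.35):
    """Rotate, centre and rescale a face image from two eye coordinates.

    left_eye, right_eye : (x, y) pixel coordinates of the eye centres
    eye_row_ratio       : assumed fixed vertical position of the eye midpoint
    """
    (lx, ly), (rx, ry) = left_eye, right_eye
    angle = np.degrees(np.arctan2(ry - ly, rx - lx))        # angle of the eye line
    mid = ((lx + rx) / 2.0, (ly + ry) / 2.0)                # midpoint of the eye line
    eye_dist = np.hypot(rx - lx, ry - ly)
    scale = 0.4 * out_size[0] / eye_dist                    # assumed target inter-eye distance
    # rotate and scale about the eye midpoint so the eyes become level
    M = cv2.getRotationMatrix2D(mid, angle, scale)
    # translate the midpoint to the horizontal centre at a fixed height position
    M[0, 2] += out_size[0] / 2.0 - mid[0]
    M[1, 2] += out_size[1] * eye_row_ratio - mid[1]
    return cv2.warpAffine(image, M, out_size)
```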
12. The intelligent three-dimensional human face rebuilding system according to claim 11, characterized in that, in the preprocessing module, the brightness normalization submodule comprises:
a testing image energy calculation unit, used for letting I(i, j) denote the gray value of the pixel in row i and column j, and for computing the energy Energy of the testing image;
a scale factor calculation unit, used for defining an average face and obtaining the energy AveryEnergy of the average face, and for determining the scale factor ratio according to the following formula:
ratio = AveryEnergy / Energy;
a brightness normalization unit, used for performing the brightness normalization on each pixel of the testing image according to the scale factor ratio.
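A minimal sketch of this energy-matching normalization follows. The exact energy formula is not reproduced in this record, so the sketch assumes energy is the mean gray value and that normalization simply multiplies every pixel by ratio; both are assumptions.

```python
import numpy as np

def normalize_brightness(test_image, average_face):
    """Scale a test image so its 'energy' matches that of an average face.

    Assumption: energy is taken here as the mean gray value; the patent's
    exact energy definition may differ.
    """
    energy = float(test_image.mean())
    avery_energy = float(average_face.mean())
    ratio = avery_energy / energy                  # scale factor
    out = test_image.astype(np.float64) * ratio    # apply ratio to every pixel
    return np.clip(out, 0, 255).astype(np.uint8)
```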
13. The intelligent three-dimensional human face rebuilding system according to claim 12, characterized in that, in the preprocessing module, the image rectification submodule is used for:
applying a two-dimensional transformation to each of the two images: each original image first undergoes a projective transform that maps the epipole to a point at infinity, so that the pencil of epipolar lines in the image becomes a set of parallel lines; a similarity transform is then applied so that each epipolar line becomes parallel to the horizontal axis of the image coordinate system; finally, to reduce distortion, a shearing transform is applied to minimize the image distortion in the horizontal direction.
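For orientation, the sketch below performs an analogous uncalibrated (Hartley-style) rectification with OpenCV, which likewise computes homographies that send the epipoles to infinity; it is not the claimed three-stage projective/similarity/shear decomposition. The fundamental matrix F and the matched point sets are assumed to come from an earlier feature-matching stage.

```python
import cv2
import numpy as np

def rectify_pair(img_left, img_right, pts_left, pts_right, F):
    """Warp two images so that epipolar lines become horizontal.

    pts_left, pts_right : Nx2 float arrays of corresponding points
    F                   : 3x3 fundamental matrix estimated from the matches
    """
    h, w = img_left.shape[:2]
    ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts_left, pts_right, F, (w, h))
    if not ok:
        raise RuntimeError("rectification homographies could not be computed")
    rect_left = cv2.warpPerspective(img_left, H1, (w, h))
    rect_right = cv2.warpPerspective(img_right, H2, (w, h))
    return rect_left, rect_right
```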
14. The intelligent three-dimensional human face rebuilding system according to claim 13, characterized in that, in the face detection and feature point extraction module, the face region is determined by the skin-color likelihood, the module comprising:
a unit for converting the color image from RGB space to YCbCr space and converting the color image into a gray-level image according to the two-dimensional Gaussian model G(m, Λ²), where each gray value corresponds to the likelihood that the pixel belongs to a skin region, and for then obtaining a threshold adaptively and further converting the gray-level image into the binary image g_ycbcr;
a unit for converting the color image from RGB space to YIQ space and extracting the I component, a pixel being a skin-color point when the value of its I component satisfies 5 < I < 80 and not a skin-color point otherwise, the segmentation yielding the binary image g_yiq;
a unit for performing an AND operation on the binary image g_ycbcr and the binary image g_yiq to obtain the image g_skin;
a unit for performing a closing operation on the binary image g_skin with a 3 × 3 unit matrix as the structuring element, setting the minimum face region area to 50 pixels, and filling in (removing) skin-color regions whose area is smaller than 50 pixels.
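A rough sketch of this skin-color segmentation pipeline is given below. The Gaussian mean and covariance for (Cb, Cr) are placeholder values rather than the patent's trained model, the adaptive threshold is taken as Otsu's method, and a 3 × 3 matrix of ones is used as the closing kernel.

```python
import cv2
import numpy as np

def skin_mask(bgr, mean=(117.4, 148.6), cov=((260.1, 12.1), (12.1, 150.0)),
              min_area=50):
    """Skin-region mask combining a YCbCr Gaussian likelihood and a YIQ I-band test."""
    # skin likelihood from a 2-D Gaussian over (Cb, Cr)
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb).astype(np.float64)
    cb, cr = ycrcb[..., 2], ycrcb[..., 1]
    d = np.stack([cb - mean[0], cr - mean[1]], axis=-1)
    inv = np.linalg.inv(np.array(cov))
    like = np.exp(-0.5 * np.einsum('...i,ij,...j->...', d, inv, d))
    gray = cv2.normalize(like, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, g_ycbcr = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # I component of YIQ computed from RGB with the standard conversion weights
    b = bgr[..., 0].astype(np.float64)
    g = bgr[..., 1].astype(np.float64)
    r = bgr[..., 2].astype(np.float64)
    i_comp = 0.596 * r - 0.274 * g - 0.322 * b
    g_yiq = ((i_comp > 5) & (i_comp < 80)).astype(np.uint8) * 255

    g_skin = cv2.bitwise_and(g_ycbcr, g_yiq)
    g_skin = cv2.morphologyEx(g_skin, cv2.MORPH_CLOSE, np.ones((3, 3), np.uint8))

    # discard connected skin regions smaller than min_area pixels
    n, labels, stats, _ = cv2.connectedComponentsWithStats(g_skin)
    for k in range(1, n):
        if stats[k, cv2.CC_STAT_AREA] < min_area:
            g_skin[labels == k] = 0
    return g_skin
```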
15. The intelligent three-dimensional human face rebuilding system according to claim 14, characterized in that, in the face detection and feature point extraction module, the face feature points are extracted by an active shape model, the module comprising:
a unit for initializing the whole face, for either open or closed eyes, with the generic whole-face shape template, to obtain the approximate positions of the two outer eye corners;
a unit for estimating the mouth contour with a local active shape model, obtaining the true mouth edge with the Canny operator, and, if the eyes are detected as open and the mouth is detected as an O-shaped mouth, selecting the whole-face template with open eyes and an O-shaped mouth to search for the whole face contour.
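As a small illustration of the Canny edge extraction used to refine the mouth contour, the snippet below runs the Canny operator on a mouth region of interest; the hysteresis thresholds and the blur kernel are illustrative values.

```python
import cv2

def mouth_edges(gray_face, mouth_box, low=50, high=150):
    """Extract candidate true-edge pixels of the mouth with the Canny operator.

    mouth_box : (x, y, w, h) region estimated by the local shape model
    low, high : Canny hysteresis thresholds (illustrative values)
    """
    x, y, w, h = mouth_box
    roi = gray_face[y:y + h, x:x + w]
    roi = cv2.GaussianBlur(roi, (5, 5), 0)   # suppress noise before edge detection
    return cv2.Canny(roi, low, high)
```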
16. The intelligent three-dimensional human face rebuilding system according to claim 15, characterized in that, in the camera calibration module, the intrinsic and extrinsic parameters of the cameras are calibrated based on a planar checkerboard calibration target.
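The sketch below shows a conventional planar-checkerboard calibration with OpenCV for a single camera; the pattern size and square size are assumptions, and the use of OpenCV is an illustration rather than the claimed implementation. The relative extrinsics between the two cameras could then be obtained, for example, with cv2.stereoCalibrate using the same corner detections.

```python
import cv2
import numpy as np

def calibrate_from_checkerboard(images, pattern_size=(9, 6), square_size=25.0):
    """Estimate camera intrinsics and per-view extrinsics from checkerboard views.

    images       : list of grayscale images of a planar checkerboard
    pattern_size : inner-corner grid size (columns, rows) -- illustrative values
    square_size  : checkerboard square edge length, assumed in millimetres
    """
    # 3-D coordinates of the corners on the calibration plane (Z = 0)
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
    objp *= square_size

    obj_points, img_points = [], []
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    for gray in images:
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if found:
            corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
            obj_points.append(objp)
            img_points.append(corners)

    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, images[0].shape[::-1], None, None)
    return K, dist, rvecs, tvecs     # intrinsics, distortion, extrinsics per view
```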
17. The intelligent three-dimensional human face rebuilding system according to claim 16, characterized in that, in the binocular stereo matching module, the computation of the disparity map generated by stereo matching is realized by seed pixel selection and region growing, comprising:
a seed pixel selection unit, used for selecting edge feature points as seed pixels for region growing; after the region growing seeded by the edge feature points is finished, a pixel not yet belonging to any grown region is selected, the matching cost function is computed with the correlation coefficient formula within a one-dimensional search window under the disparity constraint of the edge feature points, and the most reliable disparity whose matching cost is smaller than the preset matching cost threshold T is chosen; this pixel is taken as a seed pixel, this disparity is taken as the regional disparity, and the region growing step follows; if no match point is found, this step is repeated for the next neighboring pixel;
a region growing unit, used for computing, with the disparity value of the seed pixel, the matching cost of the pixels adjacent to the seed pixel, merging a pixel satisfying the constraint condition into the region of the seed pixel, and otherwise discarding the point;
a disparity map generation unit, used for executing the region growing step repeatedly until no more pixels can be merged, at which point the region is fully grown, returning to the seed pixel selection step and repeating the above steps after a new seed pixel is found, and generating the disparity map d(i, j) after all pixels in the image have been labeled.
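One plausible way to extend a gray-level cross-correlation operator to color, as described in the matching module, is to compute the zero-mean normalized cross-correlation per channel and combine the channel scores; the simple averaging used below is an assumption, and the caller is assumed to keep the correlation window inside the image.

```python
import numpy as np

def color_ncc(left, right, row, col_l, col_r, half=3):
    """Normalized cross-correlation matching score extended to color images.

    left, right : rectified H x W x 3 float arrays
    (row, col_l) and (row, col_r) form the candidate correspondence; the score
    is the zero-mean NCC computed per channel and averaged over the channels.
    """
    wl = left[row - half:row + half + 1, col_l - half:col_l + half + 1, :]
    wr = right[row - half:row + half + 1, col_r - half:col_r + half + 1, :]
    scores = []
    for c in range(3):
        a = wl[..., c] - wl[..., c].mean()
        b = wr[..., c] - wr[..., c].mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum())
        scores.append((a * b).sum() / denom if denom > 0 else 0.0)
    return float(np.mean(scores))    # 1.0 corresponds to perfect correlation
```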
18. The intelligent three-dimensional human face rebuilding system according to claim 17, characterized in that, in the three-dimensional face reconstruction module, after the three-dimensional coordinates of the spatially scattered point cloud of the face are obtained, the system further comprises units for triangulation, mesh subdivision and mesh optimization of the three-dimensional face point cloud:
a triangulation unit, used for sorting the scattered points: the point with the minimum X coordinate is found and denoted v1; the remaining points are arranged in increasing order of their squared distance to v1, forming the sequence v1, v2, v3, ..., vn; v1 and v2 are connected to form the first edge; the sequence is then searched for the first point vk that is not collinear with v1 and v2; vk is inserted before v3 and the remaining points are shifted back; v1, v2 and vk are connected to form the first triangle, whose boundary is the initial mesh front; a mesh-growing technique is then used to expand the front outward point by point, forming the initial triangular face mesh according to the maximum-minimum-interior-angle criterion;
a mesh subdivision unit, used for adopting the Loop subdivision scheme, a subdivision based on triangular control meshes, the surfaces generated by this scheme being based on quartic B-spline surfaces;
a mesh optimization unit, used for adjusting the positions of the interior nodes with Laplacian fairing to obtain the optimized three-dimensional face model.
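For the mesh subdivision unit, the following sketch applies Loop subdivision to an initial triangular face mesh using the Open3D library; the patent names no library, so Open3D and the iteration count are assumptions made purely for illustration.

```python
import numpy as np
import open3d as o3d

def refine_face_mesh(vertices, faces, subdivisions=2):
    """Apply Loop subdivision to an initial triangular face mesh.

    vertices : (n, 3) float array, faces : (m, 3) int array of triangle indices.
    """
    mesh = o3d.geometry.TriangleMesh(
        o3d.utility.Vector3dVector(vertices.astype(np.float64)),
        o3d.utility.Vector3iVector(faces.astype(np.int32)))
    mesh = mesh.subdivide_loop(number_of_iterations=subdivisions)  # Loop scheme
    mesh.compute_vertex_normals()
    return np.asarray(mesh.vertices), np.asarray(mesh.triangles)
```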

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201010209794 CN101866497A (en) 2010-06-18 2010-06-18 Binocular stereo vision based intelligent three-dimensional human face rebuilding method and system

Publications (1)

Publication Number Publication Date
CN101866497A true CN101866497A (en) 2010-10-20

Family

ID=42958211

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201010209794 CN101866497A (en) 2010-06-18 2010-06-18 Binocular stereo vision based intelligent three-dimensional human face rebuilding method and system

Country Status (1)

Country Link
CN (1) CN101866497A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002269559A (en) * 2001-03-06 2002-09-20 Toshiba Corp Template-matching method of image, and image processing device
US20040179750A1 (en) * 2002-09-10 2004-09-16 Takeshi Kumakura Image processing apparatus, image processing method and image processing program for magnifying an image
CN1617175A (en) * 2004-12-09 2005-05-18 上海交通大学 Human limb three-dimensional model building method based on labelling point

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
中国科技信息 (China Science and Technology Information), No. 15, 2006-08-31, Duan Xintao et al., "Face detection based on skin color and image likelihood", pp. 142-143, relevant to claims 1-18 *
智能系统学报 (CAAI Transactions on Intelligent Systems), Vol. 4, No. 6, 2009-12-31, Jia Beibei et al., "Three-dimensional face reconstruction method based on binocular stereo vision", pp. 513-520, relevant to claims 1-18 *

Cited By (67)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102509334A (en) * 2011-09-21 2012-06-20 北京捷成世纪科技股份有限公司 Method for converting virtual 3D (Three-Dimensional) scene into 3D view
CN102509334B (en) * 2011-09-21 2014-02-12 北京捷成世纪科技股份有限公司 Method for converting virtual 3D (Three-Dimensional) scene into 3D view
CN107092857A (en) * 2011-09-27 2017-08-25 株式会社斯巴鲁 Image processing apparatus
CN102521586A (en) * 2011-12-08 2012-06-27 中国科学院苏州纳米技术与纳米仿生研究所 High-resolution three-dimensional face scanning method for camera phone
CN102521586B (en) * 2011-12-08 2014-03-12 中国科学院苏州纳米技术与纳米仿生研究所 High-resolution three-dimensional face scanning method for camera phone
CN102592309B (en) * 2011-12-26 2014-05-07 北京工业大学 Modeling method of nonlinear three-dimensional face
CN102592309A (en) * 2011-12-26 2012-07-18 北京工业大学 Modeling method of nonlinear three-dimensional face
WO2013134998A1 (en) * 2012-03-12 2013-09-19 中兴通讯股份有限公司 3d reconstruction method and system
CN103325140A (en) * 2012-03-19 2013-09-25 中兴通讯股份有限公司 Three-dimensional reconstruction method and system
CN103424070A (en) * 2012-05-23 2013-12-04 鸿富锦精密工业(深圳)有限公司 Curved face coordinate system set-up system and method
CN102799879B (en) * 2012-07-12 2014-04-02 中国科学技术大学 Method for identifying multi-language multi-font characters from natural scene image
CN102799879A (en) * 2012-07-12 2012-11-28 中国科学技术大学 Method for identifying multi-language multi-font characters from natural scene image
CN103778598B (en) * 2012-10-17 2016-08-03 株式会社理光 Disparity map ameliorative way and device
CN103778598A (en) * 2012-10-17 2014-05-07 株式会社理光 Method and device for disparity map improving
CN104641395A (en) * 2012-10-24 2015-05-20 索尼公司 Image processing device and image processing method
CN104641395B (en) * 2012-10-24 2018-08-14 索尼公司 Image processing equipment and image processing method
CN103942558A (en) * 2013-01-22 2014-07-23 日电(中国)有限公司 Method and apparatus for obtaining object detectors
CN103093498A (en) * 2013-01-25 2013-05-08 西南交通大学 Three-dimensional human face automatic standardization method
CN103971356A (en) * 2013-02-04 2014-08-06 腾讯科技(深圳)有限公司 Street scene image segmenting method and device based on parallax information
CN103971356B (en) * 2013-02-04 2017-09-08 腾讯科技(深圳)有限公司 Street view image Target Segmentation method and device based on parallax information
CN103530599A (en) * 2013-04-17 2014-01-22 Tcl集团股份有限公司 Method and system for distinguishing real face and picture face
CN103218612A (en) * 2013-05-13 2013-07-24 苏州福丰科技有限公司 3D (Three-Dimensional) face recognition method
CN104183011A (en) * 2013-05-27 2014-12-03 万克林 Three-dimensional interactive virtual reality (3D IVR) restoring system
CN103440674A (en) * 2013-06-13 2013-12-11 厦门美图网科技有限公司 Method for rapidly generating crayon special effect of digital image
CN103440674B (en) * 2013-06-13 2016-06-22 厦门美图网科技有限公司 A kind of rapid generation of digital picture wax crayon specially good effect
CN103323209B (en) * 2013-07-02 2016-04-06 清华大学 Based on the structural modal parameter identification system of binocular stereo vision
CN103323209A (en) * 2013-07-02 2013-09-25 清华大学 Structural modal parameter identification system based on binocular stereo vision
US9704287B2 (en) 2013-11-05 2017-07-11 Shenzhen Cloud Cube Information Tech Co., Ltd. Method and apparatus for achieving transformation of a virtual view into a three-dimensional view
WO2015067071A1 (en) * 2013-11-05 2015-05-14 深圳市云立方信息科技有限公司 Method and device for converting virtual view into stereoscopic view
CN103606149A (en) * 2013-11-14 2014-02-26 深圳先进技术研究院 Method and apparatus for calibration of binocular camera and binocular camera
CN103606149B (en) * 2013-11-14 2017-04-19 深圳先进技术研究院 Method and apparatus for calibration of binocular camera and binocular camera
CN103927747B (en) * 2014-04-03 2017-01-11 北京航空航天大学 Face matching space registration method based on human face biological characteristics
CN106462724B (en) * 2014-04-11 2019-08-02 北京市商汤科技开发有限公司 Method and system based on normalized images verification face-image
CN106462724A (en) * 2014-04-11 2017-02-22 北京市商汤科技开发有限公司 Methods and systems for verifying face images based on canonical images
CN105096376A (en) * 2014-04-30 2015-11-25 联想(北京)有限公司 Information processing method and electronic device
CN104112270A (en) * 2014-05-14 2014-10-22 苏州科技学院 Random point matching algorithm based on self-adaptive weight multiple-dimensioned window
CN104112270B (en) * 2014-05-14 2017-06-20 苏州科技学院 A kind of any point matching algorithm based on the multiple dimensioned window of adaptive weighting
CN104077804B (en) * 2014-06-09 2017-03-01 广州嘉崎智能科技有限公司 A kind of method based on multi-frame video picture construction three-dimensional face model
CN104126989A (en) * 2014-07-30 2014-11-05 福州大学 Foot surface three-dimensional information obtaining method based on multiple RGB-D cameras
WO2016070300A1 (en) * 2014-11-07 2016-05-12 Xiaoou Tang System and method for detecting genuine user
CN106937532A (en) * 2014-11-07 2017-07-07 北京市商汤科技开发有限公司 System and method for detecting actual user
CN106937532B (en) * 2014-11-07 2018-08-14 北京市商汤科技开发有限公司 System and method for detecting actual user
CN104408772A (en) * 2014-11-14 2015-03-11 江南大学 Grid projection-based three-dimensional reconstructing method for free-form surface
WO2016123913A1 (en) * 2015-02-04 2016-08-11 华为技术有限公司 Data processing method and apparatus
CN104665107A (en) * 2015-03-10 2015-06-03 南京脚度健康科技有限公司 Three-dimensional data acquisition and processing system and three-dimensional data acquisition and processing method for soles
CN104657561A (en) * 2015-03-10 2015-05-27 南京脚度健康科技有限公司 Shoe making method applying three-dimensional sole data
CN104700414A (en) * 2015-03-23 2015-06-10 华中科技大学 Rapid distance-measuring method for pedestrian on road ahead on the basis of on-board binocular camera
CN104700414B (en) * 2015-03-23 2017-10-03 华中科技大学 A kind of road ahead pedestrian's fast ranging method based on vehicle-mounted binocular camera
WO2017008226A1 (en) * 2015-07-13 2017-01-19 深圳大学 Three-dimensional facial reconstruction method and system
CN105354856A (en) * 2015-12-04 2016-02-24 北京联合大学 Human matching and positioning method and system based on MSER and ORB
CN105701455A (en) * 2016-01-05 2016-06-22 安阳师范学院 Active shape model (ASM) algorithm-based face characteristic point acquisition and three dimensional face modeling method
CN106127170A (en) * 2016-07-01 2016-11-16 重庆中科云丛科技有限公司 A kind of merge the training method of key feature points, recognition methods and system
CN106127170B (en) * 2016-07-01 2019-05-21 重庆中科云从科技有限公司 A kind of training method, recognition methods and system merging key feature points
CN106295530A (en) * 2016-07-29 2017-01-04 北京小米移动软件有限公司 Face identification method and device
CN106485453A (en) * 2016-10-31 2017-03-08 华电重工股份有限公司 A kind of stock ground management method and device
CN106683174A (en) * 2016-12-23 2017-05-17 上海斐讯数据通信技术有限公司 3D reconstruction method, apparatus of binocular visual system, and binocular visual system
CN106910222A (en) * 2017-02-15 2017-06-30 中国科学院半导体研究所 Face three-dimensional rebuilding method based on binocular stereo vision
CN106952221A (en) * 2017-03-15 2017-07-14 中山大学 A kind of three-dimensional automatic Beijing Opera facial mask making-up method
CN106952221B (en) * 2017-03-15 2019-12-31 中山大学 Three-dimensional Beijing opera facial makeup automatic making-up method
CN107016640A (en) * 2017-04-06 2017-08-04 广州爱图互联网有限公司 Picture energy normalized processing method and system based on multi-resolution decomposition
GB2566050A (en) * 2017-08-31 2019-03-06 Imagination Tech Ltd Luminance-normalised colour spaces
CN108335664B (en) * 2018-01-19 2019-11-05 长春希达电子技术有限公司 The method that camera image curved surface is demarcated using two-dimentional relative movement mode
CN108062544A (en) * 2018-01-19 2018-05-22 百度在线网络技术(北京)有限公司 For the method and apparatus of face In vivo detection
CN108335664A (en) * 2018-01-19 2018-07-27 长春希达电子技术有限公司 The method that camera image curved surface is demarcated using two-dimentional relative movement mode
CN108810477A (en) * 2018-06-27 2018-11-13 江苏源控全程消防有限公司 For the method for comprehensive video training platform remote monitoring
CN109816791A (en) * 2019-01-31 2019-05-28 北京字节跳动网络技术有限公司 Method and apparatus for generating information
CN109920011A (en) * 2019-05-16 2019-06-21 长沙智能驾驶研究院有限公司 Outer ginseng scaling method, device and the equipment of laser radar and binocular camera

Similar Documents

Publication Publication Date Title
Valgaerts et al. Lightweight binocular facial performance capture under uncontrolled lighting.
US9569890B2 (en) Method and device for generating a simplified model of a real pair of spectacles
Han et al. High quality shape from a single rgb-d image under uncalibrated natural illumination
US9761060B2 (en) Parameterized model of 2D articulated human shape
US9830701B2 (en) Static object reconstruction method and system
Sun et al. Aerial 3D building detection and modeling from airborne LiDAR point clouds
Bogo et al. FAUST: Dataset and evaluation for 3D mesh registration
Wedel et al. Stereoscopic scene flow computation for 3D motion understanding
Gupta et al. Texas 3D face recognition database
Furukawa et al. Carved visual hulls for image-based modeling
Yoshizawa et al. Fast and robust detection of crest lines on meshes
Furukawa et al. Accurate, dense, and robust multiview stereopsis
Hirschmuller Stereo vision in structured environments by consistent semi-global matching
Xiao et al. Multiple view semantic segmentation for street view images
CN104077804B (en) A kind of method based on multi-frame video picture construction three-dimensional face model
Belhumeur et al. The bas-relief ambiguity
Katartzis et al. A stochastic framework for the identification of building rooftops using a single remote sensing image
CN103868460A (en) Parallax optimization algorithm-based binocular stereo vision automatic measurement method
Johnson et al. Registration and integration of textured 3D data
CN104484648B (en) Robot variable visual angle obstacle detection method based on outline identification
Hiep et al. Towards high-resolution large-scale multi-view stereo
CN102665086B (en) Method for obtaining parallax by using region-based local stereo matching
CN101398886B (en) Rapid three-dimensional face identification method based on bi-eye passiveness stereo vision
CN107924579A (en) The method for generating personalization 3D head models or 3D body models
CN102075779B (en) Intermediate view synthesizing method based on block matching disparity estimation

Legal Events

Date Code Title Description
PB01 Publication
C06 Publication
SE01 Entry into force of request for substantive examination
C10 Entry into substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20101020