CN101866497A — Intelligent three-dimensional face reconstruction method and system based on binocular stereo vision
Description
Technical field
The present invention relates to the field of binocular stereo vision, and in particular to an intelligent three-dimensional face reconstruction method and system based on binocular stereo vision.
Background technology
In recent years, information and communication technology (ICT) has permeated every sector and every part of daily life, opening an unprecedented world in which people interact with electronic devices that are embedded in their surroundings and respond to their presence. Computer-assisted security systems featuring intelligent buildings are becoming a trend in domestic research and demand increasingly complex services. Vision is the most direct and most general way for humans to obtain information about the external world. The ultimate goal of vision is to produce a meaningful interpretation and description of the observed scene, and then to plan behavior based on these interpretations, the surrounding environment, and the observer's intentions.
Computer vision refers to realizing human visual functions with computers, that is, perceiving, recognizing, and understanding the three-dimensional scenes of the objective world. Computer vision is an important branch of computer science and artificial intelligence. Its research purpose has two aspects: first, to realize part of the functions of human vision with computers; second, to help understand the mechanisms of human vision.
Binocular stereo matching has been widely applied in recent years and has expanded into many research fields, such as motion estimation, object structure reconstruction, and, more recently, 3D video coding. In a binocular stereo system, computing the disparity (or correspondence) of a rectified stereo image pair is a crucial step. Once this step is done, depth information can easily be reconstructed, and accurate, reliable face shape information can then be obtained. However, disparity computation is difficult, especially for face images: the smooth, diffuse reflectance of facial skin gives face images low texture. As a result, traditional intensity-correlation-based stereo matching methods may fail because the correspondences are ambiguous, and their performance degrades under factors such as poor illumination and occlusion. To date, several attempts have been made to handle 3D face reconstruction from stereo image pairs.
The initial result obtained from traditional stereo matching methods is far from the real surface, so a refinement process is indispensable. However, refinement is time-consuming and computationally expensive, and in most cases still fails to produce a satisfactory result.
Summary of the invention
The object of the present invention is to provide an intelligent three-dimensional face reconstruction method and system based on binocular stereo vision.
In one aspect, the invention discloses an intelligent three-dimensional face reconstruction method based on binocular stereo vision, comprising the following steps: a preprocessing step, in which preprocessing operations including image normalization, brightness normalization, and image rectification are performed on the face images; a face detection and feature point extraction step, in which the face region is detected in the preprocessed face images and facial feature points are extracted; a camera calibration step, in which the intrinsic and extrinsic parameters of the cameras are obtained by reconstructing objects from the projection matrix, yielding the camera calibration result; a binocular stereo matching step, in which, based on the facial feature points, a grayscale cross-correlation matching operator is extended to color information, and the disparity map produced by stereo matching is computed using information including the epipolar constraint, the face region constraint, and facial geometric conditions; and a three-dimensional face reconstruction step, in which the three-dimensional coordinates of the scattered face point cloud are computed from the camera calibration result and the disparity map produced by stereo matching, generating a three-dimensional face model.
In the above method, preferably, in the preprocessing step, the image normalization comprises: a rotation step, in which the face image is rotated so that the two eyes in the image are horizontal; a midpoint setting step, in which the midpoint of the line connecting the two eyes is adjusted to lie at the center of the image width and at a fixed position along the image height; and a scale transformation step, in which the rotated face image is scaled to obtain standard images of the same size.
In the above method, preferably, in the preprocessing step, the brightness normalization comprises: a test-image energy computation step, in which, letting I(i, j) denote the grayscale value of the pixel in row i and column j, the energy of the test image is computed as Energy = Σ_{i,j} I(i, j)^2; a scale factor computation step, in which an average face is defined, its energy averEnergy is obtained, and the scale factor is determined by ratio = averEnergy / Energy; and a brightness normalization step, in which each pixel of the test image is normalized according to I'(i, j) = sqrt(ratio) · I(i, j).
In the above method, preferably, in the preprocessing step, the image rectification comprises applying a two-dimensional transformation to each of the two images: the original image first undergoes a perspective transformation that moves the epipole to infinity, turning the pencil of epipolar lines into a set of parallel lines; a similarity transformation is then applied so that each epipolar line is parallel to the horizontal axis of the image coordinate system; finally, to reduce distortion, a shear transformation is applied to minimize the image distortion in the horizontal direction.
In the above method, preferably, in the face detection and feature point extraction step, the face region is determined by skin-color likelihood, comprising the steps of: transforming the color image from RGB space to YCbCr space and, according to a two-dimensional Gaussian model G(m, Λ^2), converting the color image into a grayscale image whose gray values correspond to the likelihood that each pixel belongs to a skin region, then choosing a threshold adaptively to convert the grayscale image into a binary image g_ycbcr; transforming the color image from RGB space to YIQ space and extracting the I component, where a pixel whose I component satisfies 5 < I < 80 is a skin point and otherwise is not, segmenting to obtain a binary image g_yiq; combining the binary images g_ycbcr and g_yiq with a pixel-wise AND operation to obtain the image g_skin; applying a closing operation to the binary image g_skin, with a 3 × 3 unit matrix as the structuring element; and, with the minimum face region area set to 50 pixels, filling skin-color regions smaller than 50 pixels.
In the above method, preferably, in the face detection and feature point extraction step, the facial feature points are extracted with an active shape model, comprising: for open or closed eyes, initializing the whole face with a generic whole-face shape template to obtain the approximate positions of the two outer eye corners; using a local active shape model to estimate the mouth contour and obtain the true mouth edges produced by the Canny operator; and, if the eyes are detected as open and the mouth is detected as O-shaped, selecting the whole-face template with open eyes and O-shaped mouth to search the whole facial contour.
In the above method, preferably, in the camera calibration step, the intrinsic and extrinsic camera parameters are calibrated based on a planar checkerboard target.
In the above method, preferably, in the binocular stereo matching step, the computation of the disparity map produced by stereo matching is realized by seed pixel selection and region growing, comprising: a seed pixel selection step, in which edge feature points are selected as seed pixels for region growing; after the region growing from edge feature points is finished, a pixel not belonging to any grown region is selected, and within a one-dimensional search window under the disparity constraint of the edge feature points, the matching cost function is computed with the correlation coefficient formula; the most reliable disparity is chosen, the point is taken as a seed pixel, and this disparity is set as the region disparity for the region growing step; if no match point with matching cost below a preset threshold T is found, this step is repeated for the next neighboring pixel; a region growing step, in which, using the disparity value of the seed pixel, the matching cost of pixels adjacent to the seed pixel is computed, and pixels satisfying the constraint are included into the seed pixel's region while the others are discarded; and a disparity map generation step, in which the region growing step is executed repeatedly until no more pixels can be merged, at which point the region is fully grown; the method then returns to the seed pixel selection step to find a new seed pixel and repeats the above steps, and once all pixels in the image have been labeled, the disparity map d(i, j) is generated.
In the above method, preferably, in the three-dimensional face reconstruction step, after the three-dimensional coordinates of the scattered face point cloud are obtained, the method further comprises triangulating, subdividing, and optimizing the mesh of the three-dimensional face point cloud: a triangulation step, in which the scattered points are sorted; the point with the minimum X coordinate is found and denoted v_1; the points are arranged in increasing order of squared distance to v_1, forming the sequence v_1, v_2, v_3, ..., v_n; v_1 and v_2 are connected to form the first edge; the sequence is searched for the first point not collinear with v_1 and v_2, denoted v_k, which is inserted before v_3 while the remaining points are shifted back; v_1, v_2, and v_k are connected to form the first triangle and the boundary; then, using the mesh advancing-front technique, the initial mesh front is expanded outward point by point, forming the initial triangular face mesh according to the max-min interior angle criterion; a mesh subdivision step, which adopts Loop subdivision, a subdivision scheme based on triangular control meshes whose generated surface is based on quartic B-spline surfaces; and a mesh optimization step, in which interior node positions are adjusted with Laplacian fairing to optimize the three-dimensional face model.
In another aspect, the invention also discloses an intelligent three-dimensional face reconstruction system based on binocular stereo vision, comprising a preprocessing module, a face detection and feature point extraction module, a camera calibration module, a binocular stereo matching module, and a three-dimensional face reconstruction module. The preprocessing module performs preprocessing operations on the face images, including image normalization, brightness normalization, and image rectification, and comprises an image normalization submodule, a brightness normalization submodule, and an image rectification submodule. The face detection and feature point extraction module detects the face region in the preprocessed face images and extracts facial feature points. The camera calibration module obtains the intrinsic and extrinsic camera parameters by reconstructing objects from the projection matrix, yielding the camera calibration result. The binocular stereo matching module, based on the facial feature points, extends a grayscale cross-correlation matching operator to color information and computes the disparity map produced by stereo matching using information including the epipolar constraint, the face region constraint, and facial geometric conditions. The three-dimensional face reconstruction module computes the three-dimensional coordinates of the scattered face point cloud from the camera calibration result and the disparity map produced by stereo matching, generating a three-dimensional face model.
In the above system, preferably, in the preprocessing module, the image normalization submodule comprises: a rotation unit for rotating the face image so that the two eyes in the image are horizontal; a midpoint adjustment unit for adjusting the midpoint of the line connecting the two eyes to lie at the center of the image width and at a fixed position along the image height; and a scale transformation unit for scaling the rotated face image to obtain standard images of the same size.
In the above system, preferably, in the preprocessing module, the brightness normalization submodule comprises: a test-image energy computation unit for letting I(i, j) denote the grayscale value of the pixel in row i and column j and computing the energy of the test image as Energy = Σ_{i,j} I(i, j)^2; a scale factor computation unit for defining an average face, obtaining its energy averEnergy, and determining the scale factor by ratio = averEnergy / Energy; and a brightness normalization unit for normalizing each pixel of the test image according to I'(i, j) = sqrt(ratio) · I(i, j).
In the above system, preferably, in the preprocessing module, the image rectification submodule is configured to apply a two-dimensional transformation to each of the two images: the original image first undergoes a perspective transformation that moves the epipole to infinity, turning the pencil of epipolar lines into a set of parallel lines; a similarity transformation is then applied so that each epipolar line is parallel to the horizontal axis of the image coordinate system; finally, to reduce distortion, a shear transformation is applied to minimize the image distortion in the horizontal direction.
In the above system, preferably, in the face detection and feature point extraction module, the face region is determined by skin-color likelihood, the module comprising: a unit for transforming the color image from RGB space to YCbCr space and, according to a two-dimensional Gaussian model G(m, Λ^2), converting the color image into a grayscale image whose gray values correspond to the likelihood that each pixel belongs to a skin region, then choosing a threshold adaptively to convert the grayscale image into a binary image g_ycbcr; a unit for transforming the color image from RGB space to YIQ space and extracting the I component, where a pixel whose I component satisfies 5 < I < 80 is a skin point and otherwise is not, segmenting to obtain a binary image g_yiq; a unit for combining the binary images g_ycbcr and g_yiq with a pixel-wise AND operation to obtain the image g_skin; and a unit for applying a closing operation to the binary image g_skin, with a 3 × 3 unit matrix as the structuring element, and then, with the minimum face region area set to 50 pixels, filling skin-color regions smaller than 50 pixels.
In the above system, preferably, in the face detection and feature point extraction module, the facial feature points are extracted with an active shape model, the module comprising: a unit for initializing the whole face with a generic whole-face shape template for open or closed eyes, obtaining the approximate positions of the two outer eye corners; and a unit for using a local active shape model to estimate the mouth contour, obtaining the true mouth edges produced by the Canny operator, and, if the eyes are detected as open and the mouth is detected as O-shaped, selecting the whole-face template with open eyes and O-shaped mouth to search the whole facial contour.
In the above system, preferably, in the camera calibration module, the intrinsic and extrinsic camera parameters are calibrated based on a planar checkerboard target.
In the above system, preferably, in the binocular stereo matching module, the computation of the disparity map produced by stereo matching is realized by seed pixel selection and region growing, the module comprising: a seed pixel selection unit for selecting edge feature points as seed pixels for region growing; after the region growing from edge feature points is finished, a pixel not belonging to any grown region is selected, and within a one-dimensional search window under the disparity constraint of the edge feature points, the matching cost function is computed with the correlation coefficient formula; the most reliable disparity is chosen, the point is taken as a seed pixel, and this disparity is set as the region disparity for the region growing step; if no match point with matching cost below a preset threshold T is found, this step is repeated for the next neighboring pixel; a region growing unit for computing, using the disparity value of the seed pixel, the matching cost of pixels adjacent to the seed pixel, including pixels that satisfy the constraint into the seed pixel's region and discarding the others; and a disparity map generation unit for executing the region growing step repeatedly until no more pixels can be merged, at which point the region is fully grown, returning to the seed pixel selection step to find a new seed pixel and repeating the above steps, and generating the disparity map d(i, j) once all pixels in the image have been labeled.
In the above system, preferably, in the three-dimensional face reconstruction module, after the three-dimensional coordinates of the scattered face point cloud are obtained, the system further comprises units for triangulating, subdividing, and optimizing the mesh of the three-dimensional face point cloud: a triangulation unit for sorting the scattered points, finding the point with the minimum X coordinate, denoted v_1, arranging the points in increasing order of squared distance to v_1 to form the sequence v_1, v_2, v_3, ..., v_n, connecting v_1 and v_2 to form the first edge, searching the sequence for the first point not collinear with v_1 and v_2, denoted v_k, inserting it before v_3 while shifting the remaining points back, connecting v_1, v_2, and v_k to form the first triangle and the boundary, and then, using the mesh advancing-front technique, expanding the initial mesh front outward point by point to form the initial triangular face mesh according to the max-min interior angle criterion; a mesh subdivision unit adopting Loop subdivision, a subdivision scheme based on triangular control meshes whose generated surface is based on quartic B-spline surfaces; and a mesh optimization unit for adjusting interior node positions with Laplacian fairing to optimize the three-dimensional face model.
Compared with the prior art, the present invention reconstructs and generates a realistic, smooth three-dimensional face model through face image normalization, face detection and feature point extraction, camera calibration, binocular stereo matching, and three-dimensional face reconstruction.
Description of drawings
Fig. 1 is a flow chart of the steps of an embodiment of an intelligent three-dimensional face reconstruction method based on binocular stereo vision according to the present invention;
Fig. 2 is a schematic diagram of the mathematical model of a linear camera model;
Fig. 3a is a schematic diagram of the binocular vision three-dimensional imaging principle;
Fig. 3b is a schematic diagram of the binocular vision three-dimensional imaging principle;
Fig. 4 is a schematic structural diagram of an intelligent three-dimensional face reconstruction system based on binocular stereo vision.
Embodiment
To make the above objects, features, and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the drawings and specific embodiments.
Embodiments of the invention are described below with reference to Fig. 1 through Fig. 3b.
As shown in Figure 1, face images are first acquired with the left and right cameras and then preprocessed, including image normalization, brightness normalization, and image rectification. The face region is then located by skin-color likelihood estimation, and facial feature points are extracted with an active shape model (ASM), which helps exploit prior knowledge of facial structure. During camera calibration, the intrinsic and extrinsic camera parameters are obtained by reconstructing objects from the projection matrix, reducing computational complexity. By extending the grayscale cross-correlation matching operator to color information, low-texture images such as facial skin can be partially matched. To reduce the search region and increase stereo matching accuracy, the matching algorithm computes disparity using the epipolar constraint, the face region constraint, and facial geometric conditions. Finally, the three-dimensional coordinates of the scattered face point cloud are computed from the camera calibration result and the disparity map produced by stereo matching; the point cloud is triangulated, and the mesh is subdivided and faired, producing a smooth, realistic three-dimensional face model.
Each part of the concrete implementation process is described below.
Face image normalization:
Since face detection based on image likelihood depends on the spatial correlation of image gray levels, a series of preprocessing steps must be applied to the face image to achieve position correction and grayscale normalization. First, the image is rotated so that the line connecting the two eyes is horizontal, ensuring a consistent face orientation. Second, the midpoint of the line connecting the eyes is placed at the center of the image width and at a fixed position along the image height, ensuring a consistent face position. Third, the image is scaled, yielding calibrated images of uniform size.
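The geometric part of this normalization can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the eye coordinates, the 128-pixel image width, and the fixed eye-row height of 48 are assumed values chosen for the example.

```python
import numpy as np

# Sketch of the geometric normalization: rotate so the eye line is horizontal,
# then place the eye midpoint at the horizontal center and a fixed height.
# width=128 and eye_row=48 are assumptions, not values from the patent.
def eye_alignment(left_eye, right_eye, width=128, eye_row=48):
    """Return the eye-line angle (deg) and the shift that centers the eyes."""
    (x1, y1), (x2, y2) = left_eye, right_eye
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))  # rotate by -angle to level
    mid = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)
    target = (width / 2.0, float(eye_row))
    shift = (target[0] - mid[0], target[1] - mid[1])
    return angle, shift

angle, shift = eye_alignment((40.0, 60.0), (90.0, 60.0))
print(angle, shift)  # 0.0 (-1.0, -12.0)
```

In a full pipeline the returned angle and shift would feed a single affine warp, followed by the scale normalization described above.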
In addition, because an image is inevitably affected by illumination from different directions, one side of the face may appear brighter or darker, which often affects detection or recognition. To eliminate the influence of illumination without changing the proportional relationship between the brightnesses of the pixels, we use the notion of image energy. The energy of an image is defined as the sum of the squares of its pixel gray values: Energy = Σ_{i,j} I(i, j)^2. To give every test region the same image energy, an average face is taken first and its energy averEnergy is obtained; dividing it by the energy of each test region gives the scale factor ratio = averEnergy / Energy. Each normalized test image then has the same energy Energy' = Energy × ratio, which amounts to multiplying the gray value of each pixel of the test image by the square root of ratio, i.e. I'(i, j) = sqrt(ratio) · I(i, j).
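The energy normalization above can be sketched directly. A minimal sketch, assuming the average-face energy is supplied as a constant (in the method it comes from a precomputed average face image):

```python
import numpy as np

# Energy-based brightness normalization: scale gray values so the image
# energy (sum of squared gray values) matches that of the average face.
def normalize_brightness(img, aver_energy):
    img = img.astype(np.float64)
    energy = np.sum(img ** 2)        # Energy = sum of squared gray values
    ratio = aver_energy / energy     # ratio = averEnergy / Energy
    return np.sqrt(ratio) * img      # I'(i, j) = sqrt(ratio) * I(i, j)

test = np.array([[3.0, 4.0]])        # energy = 25
out = normalize_brightness(test, aver_energy=100.0)
print(out.tolist(), np.sum(out ** 2))  # [[6.0, 8.0]] 100.0
```

Note that the scaling is uniform, so the ratio between any two pixel brightnesses is preserved, as the text requires.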
In stereo matching we wish to obtain the following result: let p_1 = (x_1, y_1, 1) be a point on the left image; then its corresponding epipolar line on the right image has the equation y = y_1, and the corresponding point is p_2 = (x_2, y_2, 1). It follows that the images must be rectified to improve matching efficiency.
The image rectification process applies a two-dimensional transformation to each of the two images, namely

U = U_s U_r U_p

Applied to a point, the rightmost factor acts first. The original image first undergoes the perspective transformation U_p: the epipole is moved to infinity, and the pencil of epipolar lines becomes a set of parallel lines. The similarity transformation U_r is then applied, making each epipolar line parallel to the horizontal axis of the image coordinate system, i.e., each epipolar line becomes horizontal. Finally, to reduce distortion, the shear transformation U_s is applied, minimizing the image distortion in the horizontal direction.
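The composition order can be checked numerically. In this sketch the three 3×3 factors are made-up toy matrices (not derived from any real camera pair); it only illustrates that composing U = U_s U_r U_p and applying it to a homogeneous point equals applying the factors one by one, perspective first:

```python
import numpy as np

def apply_h(H, pt):
    """Apply a 3x3 homography to a 2D point (x, y), with dehomogenization."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Toy factors: U_p a perspective term, U_r a rotation, U_s a horizontal shear.
U_p = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.001, 0.0, 1.0]])
theta = np.deg2rad(5.0)
U_r = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
U_s = np.array([[1.0, 0.2, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])

U = U_s @ U_r @ U_p

p = (120.0, 80.0)
step_by_step = apply_h(U_s, apply_h(U_r, apply_h(U_p, p)))
composed = apply_h(U, p)
print(np.allclose(step_by_step, composed))  # True
```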
Face detection based on skin color:
We select the YCbCr color model and perform a color space conversion on the input color image, transforming it from the highly correlated RGB space into the YCbCr color space, whose color components are mutually uncorrelated. The conversion is the standard RGB-to-YCbCr transform:

Y = 0.299 R + 0.587 G + 0.114 B
Cb = -0.1687 R - 0.3313 G + 0.5 B + 128
Cr = 0.5 R - 0.4187 G - 0.0813 B + 128
The YIQ color space derives from the NTSC television standard. The Y component carries the luminance information of the image, while the I and Q components carry the chrominance: I represents the color change from orange to cyan, and Q represents the change from purple to yellow-green. The I component essentially covers the color range of the human skin and is the most sensitive to the colors of the several skin types. The main steps of face detection based on skin-color likelihood are as follows:
(1) Transform the color image from RGB space to YCbCr space. According to a two-dimensional Gaussian model G(m, Λ^2), convert the color image into a grayscale image whose gray values correspond to the likelihood that each pixel belongs to a skin region; then choose a threshold adaptively to convert the grayscale image into the binary image g_ycbcr.
(2) Transform the color image from RGB space to YIQ space and extract the I component; a pixel whose I component satisfies 5 < I < 80 is a skin point, otherwise it is not. Segmentation yields the binary image g_yiq.
(3) Combine the binary images g_ycbcr and g_yiq with a pixel-wise AND operation to obtain the image g_skin.
(4) Apply a closing operation to the binary image g_skin, with a 3 × 3 unit matrix as the structuring element, and then fill skin-color regions smaller than 50 pixels (the minimum face region area is set to 50 pixels).
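Steps (2) and (3) can be sketched as follows. This is a simplified stand-in: the I-component test uses the standard NTSC RGB-to-YIQ weights on 8-bit RGB, while the YCbCr Gaussian-likelihood branch of step (1) is replaced by a fixed Cb/Cr box threshold (an assumption made so the sketch stays self-contained); the morphological closing and small-region filling of step (4) are omitted.

```python
import numpy as np

def skin_mask(rgb):
    """rgb: (H, W, 3) uint8 array -> boolean skin mask g_skin (steps 2-3)."""
    r, g, b = [rgb[..., k].astype(np.float64) for k in range(3)]
    # YIQ branch: the I component covers most of the human skin color range.
    i_comp = 0.596 * r - 0.274 * g - 0.322 * b
    g_yiq = (i_comp > 5) & (i_comp < 80)
    # Stand-in for the YCbCr Gaussian-likelihood branch of step (1):
    cb = -0.1687 * r - 0.3313 * g + 0.5 * b + 128
    cr = 0.5 * r - 0.4187 * g - 0.0813 * b + 128
    g_ycbcr = (cb > 77) & (cb < 127) & (cr > 133) & (cr < 173)
    # Step (3): pixel-wise AND of the two binary images.
    return g_yiq & g_ycbcr

# A skin-like pixel and a sky-like pixel.
img = np.array([[[220, 160, 130], [60, 120, 220]]], dtype=np.uint8)
print(skin_mask(img)[0].tolist())  # [True, False]
```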
Facial feature point extraction:
Points that are invisible due to occlusion or that fail detection are estimated using the global shape and texture constraints of an active appearance model (AAM). We define the facial feature point positions mainly around the eyes, nose, eyebrows, mouth, and face boundary; these points provide shape information common to any face. First, we initialize the whole face with a generic whole-face shape template for open eyes (or closed eyes), obtaining the approximate positions of the two outer eye corners. Then we use a local active shape model (ASM) to estimate the mouth contour, obtaining the true mouth edges produced by the Canny operator. If the eyes are detected as open and the mouth is detected as O-shaped, the whole-face template with open eyes and O-shaped mouth is selected to search the whole facial contour; the other cases are handled similarly. Making full use of multi-resolution search, we obtain the whole facial contour when the ASM converges or the maximum number of iterations is reached. In total, 64 feature points are located on the face automatically.
Camera calibration:
As shown in Figure 2, to extract 3D face information from a stereo face image pair, camera calibration must be performed, i.e., the intrinsic and extrinsic camera parameters must be obtained from a set of known conditions using a camera model. A commonly used camera model is the pinhole perspective model.
We adopt a camera calibration method based on a planar checkerboard target. To simplify the computation, the plane of the target is taken to be the Z_w = 0 plane of the world coordinate system. Let r_i denote the i-th column vector of R. Then, for points on the target plane:
where (u, v) and (X_w, Y_w, Z_w) are the coordinates of a spatial point in the image coordinate system and the world coordinate system, f_x, f_y, u_0, v_0 are the camera intrinsic parameters, and the rotation matrix R and translation vector t describe the relation between the camera and the world coordinate system and are called the camera extrinsic parameters. Let H = A[r_1 r_2 t] = [h_1 h_2 h_3]; then the above formula can be written as:
where A is the camera intrinsic matrix, and H is the mapping between a point on the target and its image point. If the spatial and image coordinates of the target points are known, the H matrix can be solved; by solving a least-squares system and refining further with the Levenberg-Marquardt algorithm, the intrinsic and extrinsic camera parameters can then be obtained. A Cholesky decomposition yields the intrinsic parameters, and from the matrices A and H the extrinsic parameters are computed as follows:
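The extrinsic-parameter recovery referred to here can be sketched in the usual planar-target form: with H = A[r_1 r_2 t], one takes r_1 = λ A⁻¹ h_1, r_2 = λ A⁻¹ h_2, r_3 = r_1 × r_2, t = λ A⁻¹ h_3 with λ = 1/‖A⁻¹ h_1‖. A sketch with made-up intrinsics and pose (the exact formula in the patent is not shown, so this follows the standard formulation):

```python
import numpy as np

def extrinsics_from_homography(A, H):
    """Recover R, t from H = A [r1 r2 t] (planar-target calibration)."""
    A_inv = np.linalg.inv(A)
    h1, h2, h3 = H[:, 0], H[:, 1], H[:, 2]
    lam = 1.0 / np.linalg.norm(A_inv @ h1)
    r1 = lam * (A_inv @ h1)
    r2 = lam * (A_inv @ h2)
    r3 = np.cross(r1, r2)           # third column from orthonormality
    t = lam * (A_inv @ h3)
    return np.column_stack([r1, r2, r3]), t

# Round-trip check: build H from a known R, t and recover them.
A = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
theta = np.deg2rad(10.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 1.5])
H = A @ np.column_stack([R_true[:, 0], R_true[:, 1], t_true])

R_est, t_est = extrinsics_from_homography(A, H)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
```

With noisy homographies the recovered R is only approximately a rotation; a final Levenberg-Marquardt refinement, as the text notes, is what makes the estimate usable.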
Image distortion has a considerable influence on the calibration result. Usually only radial distortion is considered during calibration, and other distortion factors are ignored. Here, we adopt Zhang's method to handle the distortion problem.
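The extrinsic-parameter computation above can be sketched in code as follows. This is a minimal numpy sketch under the stated formulas, not the patent's implementation; the function name and the final SVD re-orthogonalization step are our additions.

```python
import numpy as np

def extrinsics_from_homography(A, H):
    """Recover R and t from intrinsics A and homography H = A[r1 r2 t],
    following r1 = lam*inv(A)h1, r2 = lam*inv(A)h2, r3 = r1 x r2, t = lam*inv(A)h3."""
    A_inv = np.linalg.inv(A)
    h1, h2, h3 = H[:, 0], H[:, 1], H[:, 2]
    lam = 1.0 / np.linalg.norm(A_inv @ h1)   # scale factor lambda
    r1 = lam * (A_inv @ h1)
    r2 = lam * (A_inv @ h2)
    r3 = np.cross(r1, r2)                    # complete the right-handed frame
    t = lam * (A_inv @ h3)
    R = np.column_stack([r1, r2, r3])
    # Noise makes R only approximately a rotation; project onto SO(3) via SVD
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t
```

With noise-free input the recovered R and t reproduce the ground truth exactly, up to floating-point error.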
Binocular stereo matching:
The size of the matching window and the choice of similarity measure directly affect the precision and efficiency of matching. Weighing the trade-offs, this patent selects a 5 × 5 window for feature matching as a compromise. As for the similarity measure, an appropriate operator should be selected after comparing the effects of the alternatives. We use the correlation coefficient (cross-correlation function) r(d_x, d_y) as the similarity measure, computed as:

r(d_x, d_y) = Σ (I_1(x, y) − Ī_1)·(I_2(x + d_x, y + d_y) − Ī_2) / sqrt( Σ (I_1(x, y) − Ī_1)² · Σ (I_2(x + d_x, y + d_y) − Ī_2)² )

where the sums run over the matching window.
Here Ī_1 and Ī_2 are the mean gray values of all pixels in the two matched regions; the feature point with the maximum correlation value is taken as the matching feature point.
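The similarity measure above can be sketched as a zero-mean normalized cross-correlation of two equal-size windows (a minimal numpy sketch; the function name and the zero-texture fallback are our choices):

```python
import numpy as np

def ncc(left, right):
    """Zero-mean normalized cross-correlation of two equal-size windows,
    i.e. the correlation coefficient r described above; result lies in [-1, 1]."""
    l = left - left.mean()
    r = right - right.mean()
    denom = np.sqrt((l * l).sum() * (r * r).sum())
    if denom == 0:            # a flat window carries no texture information
        return 0.0
    return float((l * r).sum() / denom)
```

Because the means are subtracted and the result is normalized, the measure is invariant to affine gray-level changes between the two windows, which is why it is preferred over a raw sum of products.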
The constraints we adopt are as follows:
1) Face region constraint: using the accurate face region obtained earlier, combined with the epipolar constraint, the search is restricted to a very small area, which greatly improves performance.
2) Face symmetry constraint: for example, if a point in the left image lies on the left side of the face, its corresponding match in the right image can only lie on the left side of the face; if a matched point lies on the right side, it must be a false match and is deleted and re-matched. This further reduces the search range.
3) Maximum disparity constraint: the search range is limited to within a certain disparity value.
The two key steps of the region-growing stereo matching algorithm are:
1) selecting a set of seed pixels that correctly represent the desired region;
2) determining the criterion by which neighboring pixels are included during growth.
Here, the criterion for merging a neighboring pixel into the region of a seed pixel is set as follows: for a point adjacent to the seed pixel, substitute the seed pixel's disparity value d into the matching cost function; if e(i, j, d) < T is satisfied, where T is a pre-specified matching cost threshold, the neighboring pixel is merged in. All pixels merged into the region of a seed pixel then share the same disparity value, i.e., the entire grown region has a single disparity. The above criterion can be written as:

merge pixel (i, j) into the region with disparity d  if  e(i, j, d) < T
In fact, this algorithm can be understood as a stereo matching algorithm based on disparity growing. The specific algorithm is as follows:
Seed pixel selection: at initialization, edge feature points are selected as seed pixels for region growing. After region growing from the edge feature points is finished, a pixel not yet belonging to any grown region is selected; within a one-dimensional search window, under the disparity constraint of the edge feature points, the matching cost function is evaluated using the correlation-coefficient (cross-correlation) formula, and the most reliable disparity is chosen. This pixel is taken as a new seed pixel and its disparity as the region disparity, and step two is then performed. If no match point with cost below the pre-specified threshold T is found, this step is repeated for the next neighboring pixel.
Region growing: using the disparity value of the seed pixel, compute the matching cost of the pixels adjacent to the seed pixel. If the criterion above is satisfied, the pixel is merged into the seed pixel's region; otherwise the point is discarded.
Disparity map generation: step two is repeated until no more pixels can be merged, at which point the region is fully grown. Return to step one, find a new seed pixel, and repeat the steps above. When all pixels in the image have been labeled, the algorithm terminates. At this point, the disparity map d(i, j) has been generated.
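The growth loop of the algorithm above can be sketched as follows. This is an illustrative sketch only: the cost function, the seed map, and all names are placeholders, and the real procedure searches for new seeds as described rather than receiving them all up front.

```python
def grow_disparity(cost, seeds, T, shape):
    """Sketch of the disparity-growing matcher: cost(i, j, d) is the matching
    cost e(i, j, d); seeds maps a seed pixel (i, j) to its disparity d; T is
    the matching cost threshold. Returns a disparity map as {(i, j): d}."""
    disparity = {}
    stack = []
    for (i, j), d in seeds.items():
        disparity[(i, j)] = d
        stack.append((i, j, d))
    h, w = shape
    while stack:                                    # step two: grow each region
        i, j, d = stack.pop()
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < h and 0 <= nj < w and (ni, nj) not in disparity:
                if cost(ni, nj, d) < T:             # criterion e(i, j, d) < T
                    disparity[(ni, nj)] = d         # region shares the seed disparity
                    stack.append((ni, nj, d))
    return disparity
```

On a synthetic scene with piecewise-constant disparity and one seed per region, the grown map reproduces the ground truth.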
Three-dimensional face reconstruction:
In this patent, the binocular stereo vision system consists of two identical cameras whose corresponding coordinate axes are parallel, i.e., the parallel optical axis model. Let the camera focal length be f and the distance between the two optical centers (the baseline) be b. Taking the left camera coordinate system as the world coordinate system, the world coordinates of a spatial point P are (X, Y, Z); its coordinates in the left and right camera coordinate systems are (x_1, y_1, z_1) and (x_2, y_2, z_2), respectively, and its coordinates in the left and right image planes are (u_1, v_1) and (u_2, v_2), with disparity d = u_1 − u_2.
From this, the three-dimensional coordinates of the spatial point can be computed:

X = b·u_1/d,  Y = b·v_1/d,  Z = b·f/d
The formula above directly relates the depth Z of the three-dimensional object to the disparity d; the depth reflects the three-dimensional spatial information. With the baseline and focal length known, once the disparity d is determined, the Z coordinate of P can be computed, and from it the world coordinates X and Y of P. In this way, we use the formula above to obtain the three-dimensional coordinates of all matched points of the stereo face image pair. To make the depth map smoother, a 5 × 5 median filter is applied to the disparity d before computing the depth Z.
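A minimal sketch of the triangulation formulas, assuming image coordinates are already expressed relative to the principal point and in the same units as the focal length (the function name and the positive-disparity check are our additions):

```python
def triangulate(u1, v1, u2, b, f):
    """Depth from disparity under the parallel optical axis model:
    d = u1 - u2, Z = b*f/d, X = b*u1/d, Y = b*v1/d."""
    d = u1 - u2                 # disparity
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the rig")
    Z = f * b / d               # depth is inversely proportional to disparity
    X = u1 * b / d              # equivalently u1 * Z / f
    Y = v1 * b / d
    return X, Y, Z
```

For example, with f = 800, b = 0.1, a point at (0.2, 0.05, 2.0) projects to u1 = 80, v1 = 20, u2 = 40, and the formulas recover the original coordinates.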
Once the three-dimensional coordinates of all matched points have been computed, the discrete points could be displayed directly in three-dimensional space; however, to obtain a realistic three-dimensional face model, the three-dimensional data points must be triangulated. Triangulating directly in three-dimensional space would require reasoning about the positions of every three adjacent points, namely that the sphere they define must contain no other point, which is rather complicated. Instead, we first project the three-dimensional data points one-to-one onto a two-dimensional planar region, triangulate the projected points in that region, and then triangulate the spatial points through the point correspondence. To obtain a high-quality triangular mesh for the three-dimensional face model, the following operations are performed on the scattered face data points:
Step 1, triangulation: sort the scattered point cloud and find the point with the minimum X coordinate; denote it v_1. Arrange the remaining points in increasing order of their squared distance to v_1, forming the sequence v_1, v_2, v_3, ..., v_n. Connect v_1 and v_2 to form the first edge. Search the sequence for the first point not collinear with v_1 and v_2, denoted v_k; insert v_k before v_3 and shift the subsequent points back. Connect v_1, v_2, and v_k to form the first triangle and the initial mesh front. Then, using the mesh advancing-front technique, expand outward point by point according to the criterion of maximizing the minimum interior angle, forming the initial face triangle mesh.
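The initialization of Step 1 can be sketched on the 2D projected points as follows (an illustrative sketch; the name is ours and the advancing-front expansion is omitted):

```python
def initial_triangle(points):
    """Find the first triangle of Step 1: v1 is the minimum-X point, the
    remaining points are ordered by squared distance to v1, and the first
    point not collinear with v1 and v2 closes the triangle."""
    pts = sorted(points, key=lambda p: p[0])
    v1 = pts[0]
    rest = sorted(pts[1:], key=lambda p: (p[0] - v1[0]) ** 2 + (p[1] - v1[1]) ** 2)
    v2 = rest[0]
    for p in rest[1:]:
        # z-component of the cross product (v2 - v1) x (p - v1);
        # a nonzero value means p is not collinear with v1 and v2
        cross = (v2[0] - v1[0]) * (p[1] - v1[1]) - (v2[1] - v1[1]) * (p[0] - v1[0])
        if abs(cross) > 1e-12:
            return v1, v2, p
    raise ValueError("all points are collinear")
```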
Step 2, mesh subdivision: we adopt the Loop subdivision method, a subdivision scheme based on triangular control meshes whose generated surfaces are based on quartic box splines. It has been proved that the subdivision surface is C² continuous at regular vertices and C¹ continuous at extraordinary vertices, and the scheme has the advantages of simple subdivision rules and good smoothness after subdivision. Loop subdivision is an approximating 1-to-4 splitting scheme: a new node inserted on each edge of a triangle divides the edge into two segments, and the original triangle vertices are replaced by new nodes, so each triangular element is replaced by four smaller triangular elements. The rules for creating the inserted points are as follows:
(1) If an interior edge has two vertices v_0 and v_1, and the two triangles sharing this edge are (v_0, v_1, v_2) and (v_0, v_1, v_3), then the new edge point v_E is:

v_E = (3/8)·(v_0 + v_1) + (1/8)·(v_2 + v_3)
(2) If the 1-ring neighborhood vertices of an interior vertex v are v_i (i = 0, 1, ..., n−1), then the newly generated vertex v_V is:

v_V = (1 − n·β)·v + β·Σ v_i
where β is the neighbor weight; a standard choice is Loop's β = (1/n)·[5/8 − (3/8 + (1/4)·cos(2π/n))²].
(3) If the two vertices of a boundary edge are v_0 and v_1, then the new edge point v_E is:

v_E = (v_0 + v_1)/2
(4) If the two adjacent boundary vertices of a boundary vertex v are v_0 and v_1, then the newly generated vertex is:

v_V = (3/4)·v + (1/8)·(v_0 + v_1)
Compute the edge point v_E of each mesh edge and the vertex point v_V of each vertex according to the insertion rules, then connect the new edge points and new vertices to generate a new triangle mesh; repeating this process converges to the limit surface.
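Rules (1) and (2) can be sketched as follows. Rule (2) uses Loop's standard β, which the patent names but does not spell out, so treat that weight as an assumption; the function names are illustrative.

```python
import math

def edge_point(v0, v1, v2, v3):
    """New edge point for an interior edge (v0, v1) shared by triangles
    (v0, v1, v2) and (v0, v1, v3): vE = 3/8 (v0 + v1) + 1/8 (v2 + v3)."""
    return tuple(0.375 * (a + b) + 0.125 * (c + d)
                 for a, b, c, d in zip(v0, v1, v2, v3))

def vertex_point(v, ring):
    """New position of an interior vertex v with 1-ring neighbours `ring`:
    vV = (1 - n*beta) v + beta * sum(v_i), with Loop's beta (an assumption)."""
    n = len(ring)
    beta = (0.625 - (0.375 + 0.25 * math.cos(2 * math.pi / n)) ** 2) / n
    return tuple((1 - n * beta) * c + beta * sum(p[k] for p in ring)
                 for k, c in enumerate(v))
```

For a regular vertex (n = 6), β = 1/16, so the vertex keeps 1 − 6β = 5/8 of its own position and distributes the rest over the ring.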
Step 3, mesh optimization: the positions of the interior nodes are adjusted with the Laplacian fairing technique to obtain the optimized three-dimensional face model. Let v be a vertex of the triangular mesh and v_i (i = 0, 1, ..., n−1) the n mesh vertices adjacent to it; the first-order and second-order Laplacian operators at vertex v are defined as:

L(v) = (1/n)·Σ v_i − v,   L²(v) = (1/n)·Σ L(v_i) − L(v)
The vertex positions are updated with an iterative fairing formula, where n_i and n_{i,j} denote the degrees (numbers of adjacent vertices) of vertex p_i and of its j-th neighboring vertex, respectively.
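A sketch of first-order Laplacian fairing under these definitions. The damping factor and the fixed-boundary choice are our assumptions; the patent's exact degree-weighted update is not reproduced here.

```python
def laplacian(vertices, neighbors, i):
    """First-order (umbrella) Laplacian at vertex i:
    L(v_i) = (1/n) * sum(v_j) - v_i, as defined above."""
    ring = neighbors[i]
    n = len(ring)
    return tuple(sum(vertices[j][k] for j in ring) / n - vertices[i][k]
                 for k in range(3))

def fair(vertices, neighbors, interior, lam=0.5, iterations=10):
    """Iterative fairing sketch: move each interior vertex a fraction `lam`
    along its Laplacian each pass; boundary vertices stay fixed."""
    verts = dict(vertices)
    for _ in range(iterations):
        updates = {i: laplacian(verts, neighbors, i) for i in interior}
        for i, L in updates.items():
            verts[i] = tuple(c + lam * d for c, d in zip(verts[i], L))
    return verts
```

Each pass pulls an interior vertex toward the centroid of its 1-ring, which smooths high-frequency noise in the reconstructed surface.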
In another aspect, the present invention also provides an intelligent three-dimensional face reconstruction system based on binocular stereo vision, the system comprising:
As shown in Figure 4: a preprocessing module 40, used to perform preprocessing operations on the face images, including image normalization, brightness normalization, and image rectification; a face detection and feature point extraction module 41, used to detect the face region in the preprocessed face images and to extract the facial feature points; a camera calibration module 42, used to obtain the intrinsic and extrinsic camera parameters by reconstructing the object from the projection matrix, yielding the camera calibration result; a binocular stereo matching module 43, used to extend the gray-level cross-correlation matching operator to color information based on the facial feature points, and to compute the disparity map generated by stereo matching according to information including the epipolar constraint, the face region constraint, and the geometric conditions of the face; and a three-dimensional face reconstruction module 44, used to compute the three-dimensional coordinates of the scattered face point cloud from the camera calibration result and the disparity map generated by stereo matching, and to generate the three-dimensional face model.
The principle of the intelligent three-dimensional face reconstruction system based on binocular stereo vision is similar to that of the method embodiments; the two may be cross-referenced, and the details are not repeated here.
The intelligent three-dimensional face reconstruction method and system based on binocular stereo vision provided by the present invention have been described in detail above. Specific examples are used herein to explain the principles and embodiments of the present invention, and the description of the embodiments above is only intended to help understand the method of the present invention and its core idea. Meanwhile, for a person of ordinary skill in the art, there will be changes in both the specific embodiments and the scope of application in accordance with the idea of the present invention. In summary, the contents of this description should not be construed as limiting the present invention.
Application CN 201010209794 filed on 2010-06-18; published as CN101866497A on 2010-10-20; the application entered substantive examination and was rejected after publication.