CN101339669A - Three-dimensional human face modelling approach based on front side image - Google Patents

Three-dimensional human face modelling approach based on front side image

Info

Publication number
CN101339669A
Authority
CN
China
Prior art keywords
face
image
texture
gray
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2008100411323A
Other languages
Chinese (zh)
Inventor
马燕
祁抗抗
王映波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Normal University
University of Shanghai for Science and Technology
Original Assignee
Shanghai Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Normal University filed Critical Shanghai Normal University
Priority to CNA2008100411323A priority Critical patent/CN101339669A/en
Publication of CN101339669A publication Critical patent/CN101339669A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to the technical field of computer graphics and image processing, and in particular to a 3D human face modeling method based on frontal and profile images, which builds the 3D face in a fully automatic manner. The method comprises the following steps: a generic face wireframe model is built from 113 feature points and the array of 184 triangular patches connecting them, based on the Candide parameterization; two 2D images are input, and the 2D position information and depth information of a specific face are determined by an eye localization algorithm and a profile nose-tip localization algorithm, so that the generic face model is modified into the specific face model; the face patches are interpolated with the Delaunay triangulation technique and texture mapping is carried out with the bilinear interpolation technique, making the face image appear more detailed and accurate. The method thereby solves the problems of low efficiency and high computational cost in the prior art.

Description

Three-dimensional face modeling method based on frontal and profile images
Technical field
The present invention relates to the technical field of computer graphics and image processing, and specifically to a three-dimensional face modeling method based on frontal and profile images.
Background technology
With the continuous development of computer graphics, three-dimensional modeling technology has also developed rapidly and is widely applied in many fields of social production and daily life. The rapid development of computer networks, which have entered our daily lives, provides in particular unlimited room for 3D technology to develop and be applied. As network applications become ever more widespread, people increasingly tend to communicate in the virtual network world, moving from face-to-face conversation and telephone calls to online chat and e-mail. The human face is an important channel of human communication and the carrier of complex expressions such as joy, anger, sorrow and happiness, as well as of speech. With the progress of computer graphics in modeling, rendering and real-time animation, face modeling and animation have found wide application in virtual film characters, teleconferencing, criminology, medicine, information assistance, human-computer interaction, entertainment, virtual reality, face recognition and expression understanding.
The excellent performance of computers in processing graphics and images has attracted the attention of many imaginative and creative people and has promoted the rendering of ever more realistic virtual environments. To reflect the user's activity in the virtual world and to improve the user's sense of immersion, the virtual human (a complete graphic entity represented by the computer that looks like a real person) must be as lifelike as possible, and the first problem to solve is the creation of the face model and its expression animation. Accordingly, a variety of three-dimensional modeling techniques exist at present, but most of them are interactive methods in which the key points must be selected by a professional, so their work efficiency is low; moreover, interactive three-dimensional modeling techniques mostly require selecting up to a hundred key points in order to reach a visually satisfactory result, so this approach also suffers from high computational cost.
Summary of the invention
The present invention addresses the above-mentioned deficiencies of interactive methods and proposes a three-dimensional face modeling method that operates in a fully automatic manner. The three-dimensional face modeling method based on frontal and profile images comprises three-dimensional specific face modeling based on orthogonal images (Orthogonal Images) and a three-dimensional generic face model (Generic Face Model). Starting from an initially coarse face model with relatively few patches, the triangulation technique is used to refine and polish the specific face model while trading off efficiency against computational cost; finally, a more lifelike three-dimensional specific face model is synthesized. The method comprises the following: the Candide-3 model used in the present invention builds the generic face wireframe model from 113 vertices and 184 triangular patches; two two-dimensional images are input, and the two-dimensional position information and depth information of the specific face are determined by an eye localization algorithm and a profile nose-tip localization algorithm, so that the generic face model is modified into the specific face model; the face patches are then interpolated on the basis of shape analysis, making the face appear more detailed and accurate.
Description of drawings
Fig. 1 is a Candide3 faceform XY structural representation of the present invention;
Fig. 2 is a Candide3 faceform YZ structural representation of the present invention;
Fig. 3 is based on the triangular facet subdivision synoptic diagram of shape analysis;
Fig. 4 cylindrical surface projecting schematic diagram.
Embodiment
The present invention is further described below in conjunction with the accompanying drawings.
1 Building the Candide-parameterized generic face mesh
The present invention adopts the latest version of the Candide face model, Candide-3, published by Ahlberg in 2001. This version is unified with the MPEG-4 standard; the model contains 113 key points and 168 triangular patches, and its control variables consist of static Facial Definition Parameters (FDP) and dynamic Facial Animation Parameters (FAP). The Candide-3 model used in the present invention is composed of 113 vertices and 184 triangular patches.
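For illustration, the following is a minimal sketch (not taken from the patent) of how such a vertex-and-triangle wireframe model can be represented and globally adapted; the file names and the loading step are hypothetical, and only the vertex and triangle counts come from the description above.

```python
import numpy as np

class FaceMesh:
    """Minimal container for a Candide-style wireframe: vertices plus triangular patches."""
    def __init__(self, vertices, triangles):
        self.vertices = np.asarray(vertices, dtype=float)   # (N, 3): x, y, z per vertex
        self.triangles = np.asarray(triangles, dtype=int)   # (M, 3): vertex indices per patch

    def scale_and_translate(self, scale, offset):
        """Global adaptation used when fitting the generic model to a specific face."""
        self.vertices = self.vertices * scale + np.asarray(offset, dtype=float)

# Hypothetical usage: load vertex/triangle lists exported from the Candide-3 model.
verts = np.loadtxt("candide3_vertices.txt")              # assumed file, shape (113, 3)
tris = np.loadtxt("candide3_triangles.txt", dtype=int)   # assumed file, shape (184, 3)
mesh = FaceMesh(verts, tris)
```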
2 Transforming the generic face model into the specific face model
2.1 Image preprocessing
In practice, the two acquired images (photographs) will rarely be the same size, so before the feature points are calibrated, the frontal and profile images (photographs) are first normalized so that the head size is the same in both.
2.2 Generating the specific face feature points
1) Determining the left and right boundaries of the face
Let the image being processed be I(x, y), of size M × N. The vertical gray projection function of the image is then:
$$PV(x) = \sum_{y=1}^{N} I(x, y)$$
In the formula, PV is called the vertical gray projection curve. Observing the vertical gray projection curves of different single-face images shows that the face region produces a convex peak of a certain width in the curve. The left and right boundaries of this peak are roughly the left and right boundaries of the face, because the face region is usually brighter than the background: at the left and right boundaries of the face, the sum of brightness values along the vertical direction drops rapidly, forming a distinct peak. Therefore, it suffices to determine the left and right boundaries of the main peak of the vertical gray projection curve to obtain the left and right boundaries of the face. To remove noise, the vertical gray projection curve is smoothed with a sliding-window filter, whose smoothing function is:
$$PVF(x) = \frac{1}{K+1} \sum_{i=x-K/2}^{x+K/2} PV(i)$$
In the formula, K is the width of the filter window; its value is related to the size of the face in the image. Here, K = 6.
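A minimal NumPy sketch of the vertical gray projection and its sliding-window smoothing as defined above; the boundary test (taking the columns where the smoothed curve exceeds a fraction of its maximum as the main peak) is an illustrative assumption, since the patent only states that the left and right boundaries of the main peak are taken.

```python
import numpy as np

def face_left_right_bounds(img, K=6, frac=0.5):
    """img: 2D grayscale array of shape (N, M), indexed as img[y, x]."""
    # Vertical gray projection: PV(x) = sum over y of I(x, y).
    PV = img.sum(axis=0).astype(float)
    # Sliding-window smoothing, PVF(x) = 1/(K+1) * sum_{i=x-K/2}^{x+K/2} PV(i).
    kernel = np.ones(K + 1) / (K + 1)
    PVF = np.convolve(PV, kernel, mode="same")
    # Assumed peak test: columns where the smoothed projection exceeds a
    # fraction of its maximum are treated as the main face peak.
    cols = np.flatnonzero(PVF > frac * PVF.max())
    return cols.min(), cols.max()   # left and right face boundaries
```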
2) Estimating the horizontal position of the eyes
After the left and right boundaries of the face have been determined, let the width between them be m; the image region considered is then of size m × N. In the same way, the horizontal gray projection function of this image region is:
$$PH(y) = \sum_{x=1}^{m} I(x, y)$$
$$PHF(y) = \frac{1}{L+1} \sum_{i=y-L/2}^{y+L/2} PH(i)$$
The value of L is related to the size of the face in the image; L = 4 in the experiments.
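Analogously, a sketch of the horizontal gray projection over the cropped face strip; how the eye row is actually read off the smoothed curve is not specified in the description, so taking the minimum of the curve in the upper half of the face (eyes being darker than skin) is an assumption.

```python
import numpy as np

def eye_row_estimate(face_img, L=4):
    """face_img: grayscale strip between the left and right face boundaries,
    shape (N, m), indexed as face_img[y, x]."""
    # Horizontal gray projection: PH(y) = sum over x of I(x, y).
    PH = face_img.sum(axis=1).astype(float)
    kernel = np.ones(L + 1) / (L + 1)
    PHF = np.convolve(PH, kernel, mode="same")
    # Assumption: eyes are darker than skin, so the eye line shows up as a
    # minimum of the smoothed projection in the upper half of the face.
    upper = PHF[: len(PHF) // 2]
    return int(np.argmin(upper))
```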
3) Estimating the eye segmentation threshold interval and locating the eyes precisely
After the face region I₀ has been obtained, only the eyebrows and the eyes remain to be distinguished, and the eyes can be separated from it by searching for an optimal segmentation threshold. The so-called optimal segmentation threshold is a gray-level threshold that clearly separates the iris, pupil and upper eyelid from the eyebrows. Because the gray values of the iris, pupil and upper eyelid are clearly lower than those of the surrounding region (the skin around the eyes), a gray threshold that segments the iris, pupil and upper eyelid always exists; it is not unique, but lies within a certain small continuous range.
To estimate the initial threshold T₀, the first peak of the smoothed histogram must be detected. Since the smoothed histogram rarely shows two peaks within 10 consecutive gray levels, its first 250 gray levels are divided into 25 intervals, the minimum and maximum over the 10 gray levels of each interval are computed, and the position of the first peak is then searched from the low-gray end toward the high-gray end; the initial threshold T₀ is estimated in this way.
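A sketch of one plausible reading of the T₀ estimate: the first 250 gray levels are split into 25 intervals of 10 levels, and the first interval (from the low-gray end) whose maximum stands above its neighbours is taken as the first peak; the histogram smoothing window and the exact peak criterion are assumptions.

```python
import numpy as np

def estimate_initial_threshold(eye_region, smooth_win=5):
    """eye_region: grayscale face/eye area I0 as a 2D uint8 array."""
    hist, _ = np.histogram(eye_region, bins=256, range=(0, 256))
    # Smooth the histogram so that two peaks rarely fall within 10 gray levels.
    kernel = np.ones(smooth_win) / smooth_win
    hist = np.convolve(hist, kernel, mode="same")
    # Split the first 250 gray levels into 25 intervals of 10 levels each
    # and record the maximum of each interval (the minima are omitted here).
    maxima = hist[:250].reshape(25, 10).max(axis=1)
    # Assumed peak test: first interval, from the low-gray end, whose maximum
    # exceeds both neighbours; return the gray level of that peak.
    for i in range(1, 24):
        if maxima[i] > maxima[i - 1] and maxima[i] > maxima[i + 1]:
            return i * 10 + int(np.argmax(hist[i * 10:(i + 1) * 10]))
    return 60  # fallback, close to the T0 reported in the patent's experiment
```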
4) Adjusting the threshold automatically and determining the eye positions
T₀ provides an initial threshold with which the eyes may be segmented from the image, but it is not necessarily the optimal segmentation threshold. To find the optimal threshold, a suitable step T_step is chosen and the threshold is increased step by step. First, the image I₀ is binarized with T₀ and it is checked whether dark eye blobs appear; if none appear, a step T_step is added to the previous threshold, until the eye blobs appear. The dark blobs in the binary image are then labeled, the area (pixel count) of each blob is computed, and the position, width and height of the rectangle occupied by each blob are determined. Following this incremental thresholding procedure for detecting and locating the eyes, the initial threshold obtained was T₀ = 60 and, with T_step = 15 in the experiments, the eye blobs were detected at threshold T = 105, achieving an accurate localization of the eyes.
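A sketch of the automatic threshold adjustment, using SciPy's connected-component labelling for the dark blobs; the acceptance test (at least two blobs within a plausible size range) and the size limits are illustrative assumptions, since the patent only states that the threshold is raised by T_step until the eye blobs appear and that each blob's area and bounding rectangle are then measured.

```python
import numpy as np
from scipy import ndimage

def locate_eyes(I0, T0=60, T_step=15, T_max=255, min_area=20, max_area=2000):
    """I0: grayscale face region. Returns (threshold, list of blob bounding slices)."""
    T = T0
    while T <= T_max:
        binary = I0 < T                      # dark pixels: iris, pupil, eyelid, brow
        labels, n = ndimage.label(binary)    # connected dark blobs
        blobs = []
        for i, s in enumerate(ndimage.find_objects(labels), start=1):
            area = int((labels[s] == i).sum())        # pixel count of blob i
            if min_area <= area <= max_area:
                blobs.append(s)                        # bounding rectangle of the blob
        if len(blobs) >= 2:                  # assumed test: two eye blobs found
            return T, blobs
        T += T_step                          # otherwise raise the threshold and retry
    return None, []
```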
3 Delaunay triangulation based on shape analysis
The specific face is a three-dimensional face mesh composed of 113 feature points and 184 triangular patches. As it stands, this mesh is still far from being a realistic three-dimensional face model. To make the synthesized specific face increasingly lifelike, the method of the present invention subdivides the triangular patches in the coarse regions of the specific face model, inserts the resulting non-feature points into the current vertex set of the specific face model, and then performs the next interpolation iteration, repeating until the region reaches sufficient visual fidelity. Let the set Pi_tris store all patches of a region whose visual effect is not yet satisfactory; the triangulation then proceeds as follows:
1) Compute the shape of each triangular patch and then apply the corresponding subdivision operation:
First compute, for the current patch Ti, the longest edge E₁, the second-longest edge E₂ and the shortest edge E₃, with lengths e_i1, e_i2 and e_i3 respectively, and compute:
$$r_1 = \frac{e_{i1}}{e_{i2}}, \qquad r_2 = \frac{e_{i2}}{e_{i3}}, \qquad r_3 = \frac{e_{i1}}{e_{i3}}$$
2) For the three different shapes, different processing is applied:
When 1.0 ≤ r₁ ≤ H, 1.0 ≤ r₂ ≤ H and 1.0 ≤ r₃ ≤ H all hold (H = 1.3 in the experiments), the triangular patch is subdivided as shown in Fig. 3(a). Points E, F and G are the midpoints of edges E₁, E₂ and E₃ respectively. E, F and G are added as non-feature points to the set NNF of non-feature points produced by the subdivision, to be used as non-feature points in the next interpolation iteration; at the same time, those of the edges AC, BC and AB that are also edges of other triangular patches are marked in markedt. The patch ABC is deleted from the patch set, and the patches AEF, BEG and FCG are added.
When the condition 1.0 ≤ r₁ ≤ H, 1.0 ≤ r₂ ≤ H, 1.0 ≤ r₃ ≤ H is not satisfied, for simplicity we directly find the midpoint E of the longest edge E₁ and then test r₂: when r₂ ≥ H₁, the triangulation is as shown in Fig. 3(b); experiments show that H₁ = 1.6 gives the best results. In this case point F is the midpoint of edge E₂. The points B, C, E and F are projected onto the XOY plane and triangulated with the Delaunay algorithm, which returns the two triangular patches BEC and ECF. As before, the non-feature points E and F are added, the new patches are added, and the edges are marked in markedt.
When the condition 1.0 ≤ r₁ ≤ H, 1.0 ≤ r₂ ≤ H, 1.0 ≤ r₃ ≤ H is not satisfied and r₂ < H₁, the triangulation is as shown in Fig. 3(c); in this case the triangular patch is long and narrow. We find the midpoint E of the longest edge E₁ and add E to NNF as a non-feature point, then add the patches and mark the edges in markedt.
3) The concrete subdivision process is as follows: each triangular patch Ti in Pi_tris is taken in turn. In the first round, the two kinds of triangular patches shown in Figs. 3(a) and 3(b) are temporarily ignored, and the first round of subdivision processes all triangular patches with the shape shown in Fig. 3(c). The marking of each edge of Ti in markedt is checked: if the shortest edge or the second-longest edge has been marked, the patch is subdivided with the method of Fig. 3(a); if no edge is marked, or only the longest edge is marked, the method of Fig. 3(c) is used. In the second round, the triangular patches of the two shapes shown in Figs. 3(a) and 3(b) are considered. Likewise, if the shortest edge is marked, the patch is subdivided with the method of Fig. 3(a); if the shortest edge is not marked, the method of Fig. 3(a) or Fig. 3(b) is chosen according to the concrete shape of the patch.
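A sketch of the shape classification and midpoint insertion for a single triangular patch; the bookkeeping for NNF and markedt is omitted, and the Delaunay re-triangulation of the Fig. 3(b) case is only indicated by the returned label.

```python
import numpy as np

def classify_and_split(tri, H=1.3, H1=1.6):
    """tri: list of three vertices A, B, C as numpy arrays (x, y, z).
    Returns the new midpoints and a label naming the subdivision case."""
    A, B, C = tri
    # Edges with their endpoints, sorted by length: E1 (longest) .. E3 (shortest).
    edges = sorted([(np.linalg.norm(B - C), (B, C)),
                    (np.linalg.norm(A - C), (A, C)),
                    (np.linalg.norm(A - B), (A, B))],
                   key=lambda e: e[0], reverse=True)
    (e1, E1), (e2, E2), (e3, E3) = edges
    r1, r2, r3 = e1 / e2, e2 / e3, e1 / e3
    mid = lambda p, q: (p + q) / 2.0
    if 1.0 <= r1 <= H and 1.0 <= r2 <= H and 1.0 <= r3 <= H:
        # Fig. 3(a): near-equilateral; midpoints of all three edges become
        # the new non-feature points E, F, G.
        return [mid(*E1), mid(*E2), mid(*E3)], "fig3a"
    elif r2 >= H1:
        # Fig. 3(b): midpoints of the longest and second-longest edges; the
        # resulting points would then be re-triangulated with a Delaunay call.
        return [mid(*E1), mid(*E2)], "fig3b"
    else:
        # Fig. 3(c): long and narrow; only the midpoint of the longest edge.
        return [mid(*E1)], "fig3c"
```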
4 Texture mapping
The steps of texture mapping can be summarized as follows:
Define the texture image, control the texture, specify the texture mapping mode, and define the texture coordinates. Defining the texture image includes giving the resolution of the texture image, its width and height, the format of the texture image, the texture image data type, and the memory address of the texture image.
The present invention uses cylindrical projection to realize the mapping between the coordinates of the three-dimensional face model and positions in the 2D texture picture.
The principle of cylindrical projection is shown in Fig. 4: a ray is drawn from the axis of the cylinder through a point P on the object surface; it intersects the cylinder surface at a point P'(u, v), which is the projection of P onto the cylinder. The cylinder radius r is known. In the cylindrical coordinates, u denotes the vertical coordinate component of P' and v denotes the angle, measured counterclockwise, between the ray and the negative Z axis. If the center of the cylinder has coordinates (x₀, y, z₀), the cylindrical projection formula obtained from Fig. 4 is:
$$u = y, \qquad v = \begin{cases} \arctan\frac{x_0 - x}{z_0 - z}, & x \le x_0,\; z < z_0 \\ \frac{\pi}{2}, & x \le x_0,\; z = z_0 \\ \pi + \arctan\frac{x_0 - x}{z_0 - z}, & z > z_0 \\ \frac{3\pi}{2}, & x > x_0,\; z = z_0 \\ 2\pi + \arctan\frac{x_0 - x}{z_0 - z}, & x > x_0,\; z < z_0 \end{cases}$$
For the convenience of subsequent processing, the cylinder surface is unrolled into a plane: the cylindrical coordinates (u, v) are further converted into coordinates (x', y') in a rectangular plane of size W × H, using the conversion formula:
$$x' = W \times \frac{v}{2\pi}, \qquad y' = H \times \frac{u - y_{\min}}{y_{\max} - y_{\min}}$$
The boundaries are: u = y_min gives y' = 0; u = y_max gives y' = H; v = 0 gives x' = 0; v = 2π gives x' = W. To extract the information of the frontal and profile images (photographs) as completely as possible, the texture value of a point P is defined as the weighted mean of the texture values (color information) at its projections in the frontal and profile images. Let the texture value of the point corresponding to P in the frontal projection be I_f and that of the corresponding point in the profile projection be I_p; the texture value I of the point is then:
$$I = k I_f + (1 - k) I_p$$
Here k is a weight factor. When the three-dimensional point has a corresponding point only in the frontal projection, k = 1; when it has a corresponding point only in the profile projection, k = 0; when the three-dimensional point has corresponding points in both the frontal and the profile projection, a suitable value of k must be chosen. Since the frontal projection is perpendicular to the Z axis, let the area of a surface element of the three-dimensional model be S and the normal vector of this element be n = (n_x, n_y, n_z); the area S_f of this element in the frontal projection can then be obtained from:
$$S_f = |n_z| \, S$$
In the same way, S_p = |n_x| S. Therefore k can be taken as:
$$k = \frac{S_f}{S_f + S_p} = \frac{|n_z|}{|n_z| + |n_x|}$$
As stated above, the texture weight factor k of each point P is therefore:
$$k = \begin{cases} 1, & P \text{ has a corresponding point only in the frontal projection} \\ 0, & P \text{ has a corresponding point only in the profile projection} \\ \frac{|n_z|}{|n_z| + |n_x|}, & \text{otherwise} \end{cases}$$
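A sketch of the cylindrical unwrapping and the frontal/profile texture blending described above; np.arctan2 collapses the five-way case analysis of the projection formula into a single call, and the visibility flags used to decide the "only frontal" / "only profile" cases are assumptions.

```python
import numpy as np

def cylinder_uv(P, x0, z0):
    """Project a 3D point P = (x, y, z) onto the cylinder around (x0, *, z0).
    Returns (u, v) with u the height and v in [0, 2*pi) measured from the -Z axis."""
    x, y, z = P
    u = y
    v = np.arctan2(x0 - x, z0 - z) % (2 * np.pi)   # one call covers all five cases
    return u, v

def unwrap_to_plane(u, v, W, H, y_min, y_max):
    """Unroll the cylinder to a W x H rectangle: x' = W*v/(2*pi), y' = H*(u-y_min)/(y_max-y_min)."""
    x_prime = W * v / (2 * np.pi)
    y_prime = H * (u - y_min) / (y_max - y_min)
    return x_prime, y_prime

def blended_texture(I_f, I_p, normal, front_visible=True, side_visible=True):
    """Weighted mean of the frontal (I_f) and profile (I_p) texture values for a
    surface element with normal (n_x, n_y, n_z); visibility flags are assumed inputs."""
    n_x, _, n_z = normal
    if front_visible and not side_visible:
        k = 1.0
    elif side_visible and not front_visible:
        k = 0.0
    else:
        k = abs(n_z) / (abs(n_z) + abs(n_x))
    return k * I_f + (1.0 - k) * I_p
```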
In summary, the present invention realizes three-dimensional specific face modeling based on orthogonal images (Orthogonal Images) and a three-dimensional generic face model (Generic Face Model). Starting from an initially coarse face model with relatively few patches, the triangulation technique is used to refine and polish the specific face model while trading off efficiency against computational cost; finally, a more lifelike three-dimensional specific face model is synthesized.

Claims (6)

1. A three-dimensional face modeling method based on frontal and profile images, comprising: building a basic face mesh from the Candide-parameterized generic face wireframe model, which consists of 113 feature points and the 184 triangular patches connecting these points; generating the feature points of a specific face; transforming the generic face model into the specific face model; interpolating the specific mesh model to generate a finer mesh model; and mapping the pixels of the two-dimensional images onto the mesh to perform texture mapping; characterized in that the generation of the feature points of the specific face mainly adopts an eye localization method based on gray projection functions and a nose-tip localization algorithm to locate the eyes and the nose tip in the two-dimensional images.
2. The three-dimensional face modeling method based on frontal and profile images according to claim 1, characterized in that locating the eyes and the nose tip of the two-dimensional images with the eye localization method based on gray projection functions and the nose-tip localization algorithm comprises:
(1) image preprocessing;
(2) generation of the specific face feature points:
1) determining the left and right boundaries of the face:
Let the image being processed be I(x, y), of size M × N; the vertical gray projection function of the image is then:
$$PV(x) = \sum_{y=1}^{N} I(x, y)$$
In the formula, PV is called the vertical gray projection curve; observing the vertical gray projection curves of different single-face images shows that the face region produces a convex peak of a certain width in the curve, and the left and right boundaries of this peak are roughly the left and right boundaries of the face, because the face region is usually brighter than the background;
at the left and right boundaries of the face, the sum of brightness values along the vertical direction drops rapidly, forming a distinct peak;
therefore, it suffices to determine the left and right boundaries of the main peak of the vertical gray projection curve to obtain the left and right boundaries of the face; to remove noise, the vertical gray projection curve is smoothed with a sliding-window filter, whose smoothing function is:
$$PVF(x) = \frac{1}{K+1} \sum_{i=x-K/2}^{x+K/2} PV(i)$$
in the formula, K is the width of the filter window; its value is related to the size of the face in the image, and here K = 6;
2) estimating the horizontal position of the eyes:
after the left and right boundaries of the face have been determined, let the width between them be m; the image region considered is then of size m × N, and in the same way the horizontal gray projection function of this image region is:
$$PH(y) = \sum_{x=1}^{m} I(x, y)$$
$$PHF(y) = \frac{1}{L+1} \sum_{i=y-L/2}^{y+L/2} PH(i)$$
the value of L is related to the size of the face in the image; L = 4 in the experiments;
3) estimating the eye segmentation threshold interval and locating the eyes precisely:
after the face region I₀ has been obtained, only the eyebrows and the eyes remain to be distinguished, and the eyes can be separated from it by searching for an optimal segmentation threshold; the so-called optimal segmentation threshold is a gray-level threshold that clearly separates the iris, pupil and upper eyelid from the eyebrows; because the gray values of the iris, pupil and upper eyelid are clearly lower than those of the surrounding region (the skin around the eyes), a gray threshold that segments the iris, pupil and upper eyelid always exists; it is not unique, but lies within a certain small continuous range;
to estimate the initial threshold T₀, the first peak of the smoothed histogram is detected; since the smoothed histogram rarely shows two peaks within 10 consecutive gray levels, its first 250 gray levels are divided into 25 intervals, the minimum and maximum over the 10 gray levels of each interval are computed, and the position of the first peak is then searched from the low-gray end toward the high-gray end, thereby estimating the initial threshold T₀;
4) adjusting the threshold automatically and determining the eye positions:
T₀ provides an initial threshold with which the eyes may be segmented from the image, but it is not necessarily the optimal segmentation threshold; to find the optimal segmentation threshold, a step T_step is chosen and the threshold is increased step by step;
first, the image I₀ is binarized with T₀ and it is checked whether dark eye blobs appear; if none appear, a step T_step is added to the previous threshold, until the eye blobs appear; the dark blobs in the binary image are then labeled, the area (pixel count) of each blob is computed, and the position, width and height of the rectangle occupied by each blob are determined;
following this incremental thresholding procedure for detecting and locating the eyes, the initial threshold obtained was T₀ = 60 and, with T_step = 15 in the experiments, the eye blobs were detected at threshold T = 105, achieving an accurate localization of the eyes.
3. The three-dimensional face modeling method based on frontal and profile images according to claim 1 or 2, characterized in that the transformation of the generic face model into the specific face model is performed according to the generated feature points of the specific face: using the feature point positions obtained by locating the eyes and the nose tip of the two-dimensional images with the eye localization method based on gray projection functions and the nose-tip localization algorithm, the whole mesh is first scaled in proportion to the original mesh size and then translated with the nose tip as the reference point, so that the coordinates of the original mesh become substantially identical to the pixel coordinates of the two-dimensional image;
or the fitted basic mesh is further fine-tuned using the information from the eye localization algorithm, so that it fits even more closely.
4. The three-dimensional face modeling method based on frontal and profile images according to any one of claims 1 to 3, characterized in that interpolating the specific mesh model to generate a finer mesh model is mainly based on the Delaunay triangulation technique with shape analysis; by analyzing the 184 patches, a large number of points are generated within each patch and triangulated with the Delaunay function in MATLAB, so that the number of feature points and patches after interpolation exceeds 30,000 and a more refined face mesh is obtained;
the steps of the triangulation are as follows:
1) compute the shape of each triangular patch and then apply the corresponding subdivision operation: first compute, for the current patch Ti, the longest edge E₁, the second-longest edge E₂ and the shortest edge E₃, with lengths e_i1, e_i2 and e_i3 respectively, and compute:
$$r_1 = \frac{e_{i1}}{e_{i2}}, \qquad r_2 = \frac{e_{i2}}{e_{i3}}, \qquad r_3 = \frac{e_{i1}}{e_{i3}}$$
2) for the three different shapes, apply different processing:
when 1.0 ≤ r₁ ≤ H, 1.0 ≤ r₂ ≤ H and 1.0 ≤ r₃ ≤ H all hold (H = 1.3 in the experiments), the triangular patch is subdivided accordingly, where points E, F and G are the midpoints of edges E₁, E₂ and E₃ respectively;
E, F and G are added as non-feature points to the set NNF of non-feature points produced by the subdivision, to be used as non-feature points in the next interpolation iteration; at the same time, those of the edges AC, BC and AB that are also edges of other triangular patches are marked in markedt; the patch ABC is deleted from the patch set, and the patches AEF, BEG and FCG are added;
when the condition 1.0 ≤ r₁ ≤ H, 1.0 ≤ r₂ ≤ H, 1.0 ≤ r₃ ≤ H is not satisfied, for simplicity the midpoint E of the longest edge E₁ is found directly and r₂ is tested: when r₂ ≥ H₁, the triangulation is as shown in Fig. 3(b); experiments show that H₁ = 1.6 gives the best results; in this case point F is the midpoint of edge E₂; the points B, C, E and F are projected onto the XOY plane and triangulated with the Delaunay algorithm, which returns the two triangular patches BEC and ECF;
likewise, the non-feature points E and F are added, the new patches are added, and the edges are marked in markedt;
when the condition 1.0 ≤ r₁ ≤ H, 1.0 ≤ r₂ ≤ H, 1.0 ≤ r₃ ≤ H is not satisfied and r₂ < H₁, the triangular patch is long and narrow; the midpoint E of the longest edge E₁ is found and E is added to NNF as a non-feature point; the patches are then added and the edges marked in markedt;
3) the concrete subdivision process is as follows: each triangular patch Ti in Pi_tris is taken in turn; in the first round, the two kinds of triangular patches shown in Figs. 3(a) and 3(b) are temporarily ignored, and the first round of subdivision processes all triangular patches with the shape shown in Fig. 3(c);
the marking of each edge of Ti in markedt is checked: if the shortest edge or the second-longest edge has been marked, the patch is subdivided with the method shown in Fig. 3(a); if no edge is marked, or only the longest edge is marked, the method shown in Fig. 3(c) is used; in the second round, the triangular patches of the two shapes shown in Figs. 3(a) and 3(b) are considered;
likewise, if the shortest edge is marked, the patch is subdivided with the method shown in Fig. 3(a);
if the shortest edge is not marked, the method of Fig. 3(a) or Fig. 3(b) is chosen according to the concrete shape of the patch;
5. The three-dimensional face modeling method based on frontal and profile images according to any one of claims 1 to 4, characterized in that mapping the pixels of the 2D images onto the mesh and performing texture mapping attaches the color of the corresponding pixel of the two-dimensional image to each point on the mesh; the texture mapping is carried out by bilinear interpolation (the Bilinear Interpolation technique): the four pixels nearest to the target pixel are first determined, an interpolation compensating for the change of texture scale is then computed, and its result determines the color of that point.
6. The three-dimensional face modeling method based on frontal and profile images according to any one of claims 1 to 5, characterized in that the steps of the texture mapping can be summarized as:
defining the texture image, controlling the texture, specifying the texture mapping mode, and defining the texture coordinates;
defining the texture image includes giving the resolution of the texture image, its width and height, the format of the texture image, the texture image data type, and the memory address of the texture image;
the mapping between the coordinates of the three-dimensional face model and positions in the 2D texture picture is realized by cylindrical projection.
CNA2008100411323A 2008-07-29 2008-07-29 Three-dimensional human face modelling approach based on front side image Pending CN101339669A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2008100411323A CN101339669A (en) 2008-07-29 2008-07-29 Three-dimensional human face modelling approach based on front side image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNA2008100411323A CN101339669A (en) 2008-07-29 2008-07-29 Three-dimensional human face modelling approach based on front side image

Publications (1)

Publication Number Publication Date
CN101339669A true CN101339669A (en) 2009-01-07

Family

ID=40213729

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2008100411323A Pending CN101339669A (en) 2008-07-29 2008-07-29 Three-dimensional human face modelling approach based on front side image

Country Status (1)

Country Link
CN (1) CN101339669A (en)


Cited By (47)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101996415B (en) * 2009-08-28 2013-10-09 珠海金联安警用技术研究发展中心有限公司 Three-dimensional modeling method for eyeball
CN101916454B (en) * 2010-04-08 2013-03-27 董洪伟 Method for reconstructing high-resolution human face based on grid deformation and continuous optimization
CN101916454A (en) * 2010-04-08 2010-12-15 董洪伟 Method for reconstructing high-resolution human face based on grid deformation and continuous optimization
CN102157010A (en) * 2011-05-25 2011-08-17 上海大学 Method for realizing three-dimensional facial animation based on layered modeling and multi-body driving
CN102222363A (en) * 2011-07-19 2011-10-19 杭州实时数码科技有限公司 Method for fast constructing high-accuracy personalized face model on basis of facial images
CN103116902A (en) * 2011-11-16 2013-05-22 华为软件技术有限公司 Three-dimensional virtual human head image generation method, and method and device of human head image motion tracking
CN103034861A (en) * 2012-12-14 2013-04-10 北京航空航天大学 Identification method and device for truck brake shoe breakdown
CN103034861B (en) * 2012-12-14 2016-12-21 北京航空航天大学 The recognition methods of a kind of truck brake shoe breakdown and device
CN104969240B (en) * 2013-02-27 2017-10-24 索尼公司 Method and system for image procossing
CN104969240A (en) * 2013-02-27 2015-10-07 索尼公司 Method and system for image processing
CN103606190A (en) * 2013-12-06 2014-02-26 上海明穆电子科技有限公司 Method for automatically converting single face front photo into three-dimensional (3D) face model
CN103606190B (en) * 2013-12-06 2017-05-10 上海明穆电子科技有限公司 Method for automatically converting single face front photo into three-dimensional (3D) face model
WO2018076437A1 (en) * 2016-10-25 2018-05-03 宇龙计算机通信科技(深圳)有限公司 Method and apparatus for human facial mapping
CN106780713A (en) * 2016-11-11 2017-05-31 吴怀宇 A kind of three-dimensional face modeling method and system based on single width photo
CN106778474A (en) * 2016-11-14 2017-05-31 深圳奥比中光科技有限公司 3D human body recognition methods and equipment
US11398044B2 (en) 2018-04-12 2022-07-26 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method for face modeling and related products
CN108564619B (en) * 2018-04-25 2021-05-14 厦门大学 Realistic three-dimensional face reconstruction method based on two photos
CN108564619A (en) * 2018-04-25 2018-09-21 厦门大学 A kind of sense of reality three-dimensional facial reconstruction method based on two photos
CN109087340A (en) * 2018-06-04 2018-12-25 成都通甲优博科技有限责任公司 A kind of face three-dimensional rebuilding method and system comprising dimensional information
CN109191510A (en) * 2018-07-09 2019-01-11 研靖信息科技(上海)有限公司 A kind of the 3D method for reconstructing and its device of pathological section
CN109191510B (en) * 2018-07-09 2020-05-15 研境信息科技(上海)有限公司 3D reconstruction method and device for pathological section
CN109493312A (en) * 2018-09-01 2019-03-19 哈尔滨工程大学 A kind of image partition method based on BEC prediction model
CN109493312B (en) * 2018-09-01 2021-10-26 哈尔滨工程大学 Image segmentation method based on BEC prediction model
CN109191557B (en) * 2018-09-11 2023-05-02 中国科学院国家天文台 Image texture mapping method and device for stereoscopic topographic map
CN109191557A (en) * 2018-09-11 2019-01-11 中国科学院国家天文台 The image texture mapping method and device of relief map
CN109325437A (en) * 2018-09-17 2019-02-12 北京旷视科技有限公司 Image processing method, device and system
CN109325437B (en) * 2018-09-17 2021-06-22 北京旷视科技有限公司 Image processing method, device and system
CN109509179A (en) * 2018-10-24 2019-03-22 深圳市旭东数字医学影像技术有限公司 Eyeball and lenticular automatic division method and system based on medical image
CN109509179B (en) * 2018-10-24 2023-03-28 深圳市旭东数字医学影像技术有限公司 Automatic segmentation method and system for eyeballs and crystalline lenses based on medical images
CN110060348A (en) * 2019-04-26 2019-07-26 北京迈格威科技有限公司 Facial image shaping methods and device
CN110060348B (en) * 2019-04-26 2023-08-11 北京迈格威科技有限公司 Face image shaping method and device
CN110675413B (en) * 2019-09-27 2020-11-13 腾讯科技(深圳)有限公司 Three-dimensional face model construction method and device, computer equipment and storage medium
CN110675413A (en) * 2019-09-27 2020-01-10 腾讯科技(深圳)有限公司 Three-dimensional face model construction method and device, computer equipment and storage medium
CN110941332A (en) * 2019-11-06 2020-03-31 北京百度网讯科技有限公司 Expression driving method and device, electronic equipment and storage medium
CN111179210A (en) * 2019-12-27 2020-05-19 浙江工业大学之江学院 Method and system for generating texture map of face and electronic equipment
CN111081375A (en) * 2019-12-27 2020-04-28 北京深测科技有限公司 Early warning method and system for health monitoring
CN111179210B (en) * 2019-12-27 2023-10-20 浙江工业大学之江学院 Face texture map generation method and system and electronic equipment
CN111081375B (en) * 2019-12-27 2023-04-18 北京深测科技有限公司 Early warning method and system for health monitoring
CN112308962A (en) * 2020-11-05 2021-02-02 山东产研信息与人工智能融合研究院有限公司 Real scene model construction method and device with entity target as minimum unit
CN112308962B (en) * 2020-11-05 2023-10-17 山东产研信息与人工智能融合研究院有限公司 Live-action model construction method and device taking entity target as minimum unit
CN112950818A (en) * 2021-03-23 2021-06-11 德施曼机电(中国)有限公司 Intelligent lock management system and method for mechanical key projection recognition
CN113011393A (en) * 2021-04-25 2021-06-22 中国民用航空飞行学院 Human eye positioning method based on improved hybrid projection function
CN113450460A (en) * 2021-07-22 2021-09-28 四川川大智胜软件股份有限公司 Phase-expansion-free three-dimensional face reconstruction method and system based on face shape space distribution
CN114974042B (en) * 2022-06-21 2023-06-23 北京神州泰业科技发展有限公司 Method and system for enabling projection to project onto object surface to enhance reality effect
CN114974042A (en) * 2022-06-21 2022-08-30 北京神州泰业科技发展有限公司 Method and system for projecting projection onto object surface to enhance reality effect
CN116051782A (en) * 2022-11-30 2023-05-02 武汉船舶通信研究所(中国船舶重工集团公司第七二二研究所) Data processing and reconstruction modeling method, device and storage medium based on orthogonal grid curve interpolation
CN116051782B (en) * 2022-11-30 2024-03-22 港珠澳大桥管理局 Data processing and reconstruction modeling method, device and storage medium based on orthogonal grid curve interpolation


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Open date: 20090107