CN105719326A - Realistic face generating method based on single photo - Google Patents


Info

Publication number
CN105719326A
CN105719326A (application CN201610035432.5A)
Authority
CN
China
Prior art keywords
face
model
texture
alpha
photo
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610035432.5A
Other languages
Chinese (zh)
Inventor
谈国新
孙传明
张文元
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong Normal University
Original Assignee
Huazhong Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong Normal University filed Critical Huazhong Normal University
Priority to CN201610035432.5A priority Critical patent/CN105719326A/en
Publication of CN105719326A publication Critical patent/CN105719326A/en
Pending legal-status Critical Current


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 — 2D [Two Dimensional] image generation
    • G06T 11/001 — Texturing; Colouring; Generation of texture or colour
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 — Feature extraction; Face representation
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 — Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/172 — Classification, e.g. identification
    • G06V 40/173 — Classification, e.g. identification; face re-identification, e.g. recognising unknown faces across different face tracks

Abstract

The invention discloses a realistic face generation method based on a single photo. The method first constructs a standard face model library, interactively selects facial feature points on the input photo, and matches the optimal model based on face-contour features. Second, it maps the photo's texture onto the three-dimensional model by triangle warping and bilinear interpolation, and introduces an Alpha map to achieve a fused transition from the overlay face texture to the model's neutral texture. Finally, a mesh-adjustment method tunes the model hierarchically from the whole to the details, yielding a realistic three-dimensional face. The method is user-friendly and requires very few feature points: with just one frontal face photo and a handful of feature points, a user can take part in and independently complete the creation of a realistic face.

Description

A realistic face generation method based on a single photo
Technical field
The present invention relates to a method for rapidly generating three-dimensional faces, and in particular to a realistic face generation method based on a single photo.
Background art
The face is a central medium of emotional expression and identity recognition. In everyday life we identify people by their faces and express joy, anger, grief, and happiness through them, so the face plays a very important role in daily life. With the development of computer graphics, face reconstruction for specific images has come into wide use in many fields. Building a three-dimensional character with a realistic face has a positive effect on user interest and engagement. In fields such as game animation, film and advertising, three-dimensional campuses, digital tourism, and virtual fitting rooms, realistic faces are one means of improving the user's sensory experience, so researching a face-model generation technique that is more realistic, cheaper to produce, and fast and convenient to run is of great significance.
Existing rapid face generation methods mainly acquire three-dimensional face data with equipment such as 3D scanners; the resulting models have high fidelity, but the approach depends on massive point-cloud data, demands advanced technology, and is expensive. Other methods reconstruct a face model from multiple frontal and side images using visual-correlation theory and algorithms; these require many manually calibrated feature points, and the input conditions are complex. Conventional face-model generation methods therefore suffer from expensive equipment, complex pipelines, and insufficient real-time performance, and cannot fully meet the demand for real-time interaction in virtual scenes.
Summary of the invention
The object of the invention is to overcome the defects of the above techniques by providing a realistic face generation method based on a single photo. The method is user-friendly and requires few feature points: with just one frontal face photo and a small number of feature points, the user can take part in and independently complete the creation of a realistic face. The method brings the user into the face-creation process: a small number of feature points are chosen interactively, a suitable face model is matched, texture mapping from the overlay texture to the neutral texture and skin-tone fusion at the facial boundary are carried out, and finally the model is adjusted from the whole down to the details, generating a personalized face model.
The concrete technical scheme is as follows:
A realistic face generation method based on a single photo comprises the following steps:
Step 1. Feature-point calibration: obtain a frontal two-dimensional face photo and mark 13 feature points on 7 facial parts (eyes, nose, mouth, etc.) to determine the approximate locations of the facial features;
Step 2. Model matching: compare the feature-point distances between the photo and the models, and match the model with the highest similarity in the model library;
Step 3. Texture mapping: following the texture distribution of the face's true colors, map the overlay texture onto the neutral texture, where the neutral texture is the model's default texture and the overlay texture is the face photo; through the mapping of the 13 feature points and their surrounding regions in the two texture layers, add the textural features of the face photo to the matched model;
Step 4. Skin-tone fusion: use an Alpha map so that the boundary regions of the fused face transition smoothly from the overlay texture to the neutral texture; at the same time, use the Alpha map to fine-tune details such as the eyebrows and the bridge of the nose;
Step 5. Model adjustment: adjust characteristics such as the size and position of the facial features (face contour, eyes, face shape) of the matched model by vector interpolation, generating a personalized model that matches the subject's facial features.
Preferably, Step 5 constructs a model regulator for adjusting slight variations of the facial features, generating a personalized, realistic face.
Let the vertices Vi = (x, y, z)^T of the face mesh form the vector group H0 = (V1, V2, V3, ..., Vn), where n is the number of vertices. After a transformation of the eyes (e.g. resizing), the positions of some points change, forming the vector group H1 = (V1', V2', V3', ..., Vn'). Assign this transformation a weight W; then every transition state Hx between H0 and H1 can be computed by linear interpolation:
Hx = H0 + W * (H1 − H0)    (1)
A single transformation rarely meets the needs of a face model; multiple transformations must be selected according to the facial features, each assigned a weight Wi. The mixing formula of m transformations is then:
Hx = H0 + Σ_{i=1}^{m} Wi * (Hi − H0)    (2)
Here m is determined by the needs of the practical application. In the experiments, mesh actuators for 23 different transformations were defined, including: eye height, eye distance, eye size, eyebrow height, eyebrow depth, nose size, nose depth, nose height, nose-bridge depth, nose-bridge width, nostril width, mouth height, mouth thickness, mouth width, lip coverage, cheekbone depth, cheekbone width, cheekbone height, cheek width, chin height, chin depth, chin width, and jaw width. The actuators deform the mesh by controlling the weights.
Preferably, the model-matching method in Step 2 is as follows: first, normalize the photo and compute the distances and proportions between the principal feature points; second, determine the rough face-shape class from the feature-point distances and proportions; finally, within the model library of the corresponding face-shape class, match the Euclidean distances between corresponding feature points of model and photo, and use formula (3) to find the closest three-dimensional model;
E = Σ_{i=1}^{n} (Di − Di')^2 * λi    (3)
where Di is the distance between a pair of feature points in the photo, Di' is the distance between the corresponding feature points in the three-dimensional model, and λi is the weight of each feature-point distance, determined by its contribution and an empirical formula; n is the number of distances compared.
Preferably, in Step 3, suppose the marked feature-point positions of the neutral texture and of the overlay texture form two two-dimensional point sets V1 and V2. Because of differences in manual marking, corresponding points in V1 and V2 will not lie at exactly the same positions, so a mapping method is needed to fuse them naturally. Take P1 ∈ V1 and P2 ∈ V2, extend P1 and P2 to three-dimensional vectors P1 = (X1, Y1, 1) and P2 = (X2, Y2, 1), and assume a matrix M exists; then:
M = P1 * P2^(−1)    (4)
After the transformation matrix of each point is obtained, the feature-point transformations are applied to the pixels around the feature points. The pipeline uses the common triangle-warping approach to apply the overlay texture to the model texture: the feature points are connected to form a mesh covering the face; since this mesh contains both triangles and quadrilaterals, the polygons are all split into triangles, and every pixel inside a triangle is influenced by the triangle's three vertices;
Suppose the three vertices of a triangle are P1, P2, P3; take them as three columns to form the 3×3 matrix (P1, P2, P3), and let the transformed triangle be (P1', P2', P3'); then:
M * (P1, P2, P3) = (P1', P2', P3')    (5)
M = (P1', P2', P3') * (P1, P2, P3)^(−1)    (6)
For any point Pi inside the triangle, M * Pi gives its image Pi', which determines the mapped coordinates of all points in the triangle. Unmapped points have integer coordinates, but the mapped coordinates are generally not integers, so they cannot correspond one-to-one with the pixels of the image.
Preferably, in the actual fusion process of Step 4, the model's neutral texture, the Alpha map, and the overlay texture are processed to the same size, and the pixels of the Alpha map serve as the interpolation weights of the mixing, finally forming a naturally fused texture. The mixing formula of the three images is:
C(x, y) = Cbase(x, y) * (255 − CAlpha(x, y)) + Coverlay(x, y) * CAlpha(x, y),  where 0 ≤ CAlpha(x, y) ≤ 255    (7)
where Cbase(x, y), CAlpha(x, y), and Coverlay(x, y) denote the color values of the neutral texture, the Alpha map, and the overlay texture at coordinate (x, y).
Compared with the prior art, the invention has the following benefits:
The method is user-friendly and requires few feature points: with just one frontal face photo and a small number of feature points, the user can take part in and independently complete the creation of a realistic face. The method brings the user into the face-creation process: a small number of feature points are chosen interactively, a suitable face model is matched, texture mapping from the overlay texture to the neutral texture and skin-tone fusion at the facial boundary are carried out, and finally the model is adjusted from the whole down to the details, generating a personalized face model.
Brief description of the drawings
Fig. 1 shows the realistic-face construction workflow;
Fig. 2 shows the feature-point positions;
Fig. 3 shows the feature-point connections;
Fig. 4 shows the Alpha maps.
Detailed description of the invention
To make the technical means, creative features, objectives, and effects of the invention easy to understand, the invention is further explained below with reference to the accompanying drawings and concrete examples.
First, a standard face model library is constructed, the facial feature points of the input photo are chosen interactively, and the best model is matched based on face-shape features. Second, texture mapping from the photo to the three-dimensional model is realized by triangle warping and bilinear interpolation, and an Alpha map is introduced to achieve a fused transition from the overlay face texture to the model's neutral texture. Finally, a mesh-adjustment method tunes the model hierarchically from the whole to the details, achieving the goal of generating a realistic three-dimensional face.
1. Steps of the interactive generation method
The interactive generation of a realistic three-dimensional face first builds a face model library for Asian facial features; the two-dimensional frontal photo is then matched against the face models, texture mapping and skin-tone blending are applied, and the realistic three-dimensional model is generated through the fine adjustments of the model regulator.
1.1 Model library construction
Several standardized three-dimensional face model libraries already exist, such as UND, BU-3DFE, and BJUT, and are widely used in fields such as face generation and face recognition. Different model libraries adopt different standards for race, age, sex, illumination, and so on; the method of the invention targets Asian populations and builds a dedicated three-dimensional face library.
Several standards exist for classifying Asian frontal face shapes, such as the waveform method, the character-shape method, and the Asian method. To let model face shapes match photos well, the model library divides face shapes into 5 classes, oval, triangular, elongated, square, and round, according to how distinct and how common each feature is. The neutral face model was generated with reference to CANDIDE-3, the face wireframe model developed by the Image Coding Group (ICG) at Linköping University, Sweden[8]; that model is simple and freely available for outside use. But CANDIDE-3 covers only the face, so this method, taking the CANDIDE-3 neutral face model as a reference, tailors a neutral head model suited to Chinese face-shape features. After the model library is built, the models must be preprocessed, including repairing holes, binding the neutral skin-tone texture, normalizing size, and calibrating feature points.
1.2 Generation steps
A three-dimensional face shows its realism mainly in two aspects, the facial features of the model and its texture, and the user participates in assigning the texture and adjusting the model. The concrete workflow, shown in Fig. 1, divides into the following 5 steps:
Step 1. Feature-point calibration: obtain a frontal two-dimensional face photo and mark 13 feature points on 7 facial parts (eyes, nose, mouth, etc.) to determine the approximate locations of the facial features.
Step 2. Model matching: compare the feature-point distances between the photo and the models, and match the model with the highest similarity in the model library.
Step 3. Texture mapping: following the texture distribution of the face's true colors, map the overlay texture onto the neutral texture. The neutral texture is the model's default texture, and the overlay texture is the face photo. Through the mapping of the 13 feature points and their surrounding regions in the two texture layers, the textural features of the target face are added to the matched model.
Step 4. Skin-tone fusion: use an Alpha map so that the boundary regions of the fused face transition smoothly from the overlay texture to the neutral texture. At the same time, use the Alpha map to fine-tune details such as the eyebrows and the bridge of the nose.
Step 5. Model adjustment: adjust characteristics such as the size and position of the facial features (face contour, eyes, face shape) of the matched model by vector interpolation, generating a personalized model that matches the subject's facial features.
1.3 Model regulator design
The model regulator performs local adjustments to the layout of the facial features, bringing the model closer to the real face. Common 3D modeling packages such as Maya and 3DS Max provide a blend-shapes (also called morph) function for interpolating a model between two or more states, forming different facial layouts or expressions. The invention uses this idea and constructs a model regulator for adjusting slight variations of the facial features, generating a personalized, realistic face.
Let the vertices Vi = (x, y, z)^T of the face mesh form the vector group H0 = (V1, V2, V3, ..., Vn), where n is the number of vertices. After a transformation of the eyes (e.g. resizing), the positions of some points change, forming the vector group H1 = (V1', V2', V3', ..., Vn'). Assign this transformation a weight W. Then every transition state Hx between H0 and H1 can be computed by linear interpolation:
Hx = H0 + W * (H1 − H0)    (1)
A single transformation rarely meets the needs of a face model; multiple transformations must be selected according to the facial features, each assigned a weight Wi. The mixing formula of m transformations is then:
Hx = H0 + Σ_{i=1}^{m} Wi * (Hi − H0)    (2)
Here m is determined by the needs of the practical application. In the experiments, mesh actuators for 23 different transformations were defined, including: eye height, eye distance, eye size, eyebrow height, eyebrow depth, nose size, nose depth, nose height, nose-bridge depth, nose-bridge width, nostril width, mouth height, mouth thickness, mouth width, lip coverage, cheekbone depth, cheekbone width, cheekbone height, cheek width, chin height, chin depth, chin width, and jaw width. The actuators deform the mesh by controlling the weights.
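The blendshape mixing described above can be sketched in a few lines of NumPy. This is an illustrative implementation, not the patent's code; the function and variable names, and the toy two-vertex mesh, are assumptions made for the example:

```python
import numpy as np

def mix_transforms(h0, targets, weights):
    """Blendshape mixing of formula (2): Hx = H0 + sum_i Wi * (Hi - H0).

    h0      -- (n, 3) array of neutral mesh vertices (the vector group H0)
    targets -- list of m (n, 3) arrays, one deformed vertex set Hi per actuator
    weights -- list of m scalar weights Wi controlled by the actuators
    """
    h0 = np.asarray(h0, dtype=float)
    hx = h0.copy()
    for hi, wi in zip(targets, weights):
        hx += wi * (np.asarray(hi, dtype=float) - h0)
    return hx

# With a single target, the mixing reduces to formula (1): Hx = H0 + W * (H1 - H0).
h0 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])   # neutral vertices
h1 = np.array([[0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])   # e.g. an "eye size" target
hx = mix_transforms(h0, [h1], [0.5])                 # halfway transition state
```

In the scheme above, each of the 23 actuators would contribute one (Hi, Wi) pair; setting every weight to zero simply returns the neutral mesh H0.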
2. Key technologies of realistic face generation
The key technologies of realistic face generation are the matching of the two-dimensional face photo with the three-dimensional model, texture mapping, and skin-tone fusion.
2.1 Feature-point calibration and matching
After the two-dimensional face photo is obtained, feature points must be calibrated at its key positions in order to match the best face model. The MPEG-4 standard defines 84 feature points on a neutral face. With reference to MPEG-4, and to meet the needs of model matching and texture mapping while reducing the complexity of manual intervention, 13 feature points at key positions are chosen on both the input photo and the face model as the markers for matching and mapping, as shown in Fig. 2. According to research results in three-dimensional face recognition, the mutual relations among these key feature points roughly determine the positions of the facial features and the face shape, and thus suffice to match the optimal model. The experimental section validates the effectiveness of the 13 feature points by comparing matching with different numbers of feature points.
The model-matching method is as follows: first, normalize the photo and compute the distances and proportions between the principal feature points; second, determine the rough face-shape class from the feature-point distances and proportions; finally, within the model library of the corresponding face-shape class, match the Euclidean distances between corresponding feature points of model and photo, and use formula (3) to find the closest three-dimensional model.
E = Σ_{i=1}^{n} (Di − Di')^2 * λi    (3)
where Di is the distance between a pair of feature points in the photo, Di' is the distance between the corresponding feature points in the three-dimensional model, and λi is the weight of each feature-point distance, determined by its contribution and an empirical formula; n is the number of distances compared.
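The weighted matching of formula (3) amounts to a weighted squared-distance search over the candidate models. A minimal sketch, with hypothetical names and a toy three-distance example (the real weights λi come from the patent's empirical formula, which is not given):

```python
import numpy as np

def match_error(photo_dists, model_dists, lambdas):
    """Weighted matching error of formula (3): E = sum_i (Di - Di')^2 * lambda_i."""
    d = np.asarray(photo_dists, dtype=float)
    dp = np.asarray(model_dists, dtype=float)
    lam = np.asarray(lambdas, dtype=float)
    return float(np.sum((d - dp) ** 2 * lam))

def best_model(photo_dists, library, lambdas):
    """Index of the model in the face-shape class whose feature-point
    distances are closest to the photo's (smallest E)."""
    errors = [match_error(photo_dists, m, lambdas) for m in library]
    return int(np.argmin(errors))

# Toy example: 3 normalized feature-point distances, 2 candidate models.
photo = [1.0, 2.0, 3.0]
library = [[1.1, 2.0, 2.9], [0.5, 1.5, 2.5]]
idx = best_model(photo, library, [1.0, 1.0, 1.0])   # equal weights
```

With equal weights the search degenerates to a plain nearest-neighbour match; unequal λi let the more discriminative distances dominate.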
2.2 Texture mapping
Researchers at home and abroad have proposed many texture mapping methods. Kraevoy et al. triangulate the parameterized model and its corresponding texture according to the correspondence of feature points and establish the mapping relations; the algorithm is built on heavy iteration and has high complexity. The invention draws on Kraevoy's method and introduces triangle warping and bilinear interpolation to realize the feature-point mapping from the overlay texture to the neutral texture.
Suppose the marked feature-point positions of the neutral texture and of the overlay texture form two two-dimensional point sets V1 and V2. Because of differences in manual marking, corresponding points in V1 and V2 will not lie at exactly the same positions, so a mapping method is needed to fuse them naturally. Take P1 ∈ V1 and P2 ∈ V2, extend P1 and P2 to three-dimensional vectors P1 = (X1, Y1, 1) and P2 = (X2, Y2, 1), and assume a matrix M exists; then:
M = P1 * P2^(−1)    (4)
After the transformation matrix of each point is obtained, the feature-point transformations are applied to the pixels around the feature points. The pipeline uses the common triangle-warping approach to apply the overlay texture to the model texture. As shown in Fig. 3, the feature points are connected to form a mesh covering the face; since this mesh contains both triangles and quadrilaterals, the polygons are all split into triangles, and every pixel inside a triangle is influenced by the triangle's three vertices.
Suppose the three vertices of a triangle are P1, P2, P3. Take them as three columns to form the 3×3 matrix (P1, P2, P3), and let the transformed triangle be (P1', P2', P3'); then:
M * (P1, P2, P3) = (P1', P2', P3')    (5)
M = (P1', P2', P3') * (P1, P2, P3)^(−1)    (6)
For any point Pi inside the triangle, M * Pi gives its image Pi', which determines the mapped coordinates of all points in the triangle. Unmapped points have integer coordinates, but the mapped coordinates are generally not integers, so they cannot correspond one-to-one with the pixels of the image. In that case, sampling the image colors with bilinear interpolation gives good results.
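Formulas (5) and (6) together with the bilinear sampling step can be sketched as follows. This is an illustrative NumPy version under the assumption of homogeneous (x, y, 1) columns and a grayscale image; the function names are not from the patent:

```python
import numpy as np

def triangle_affine(src_tri, dst_tri):
    """Affine map M of formula (6): M = (P1',P2',P3') * (P1,P2,P3)^(-1),
    with each 2D vertex (x, y) extended to a column (x, y, 1)."""
    src = np.column_stack([(x, y, 1.0) for x, y in src_tri])
    dst = np.column_stack([(x, y, 1.0) for x, y in dst_tri])
    return dst @ np.linalg.inv(src)

def bilinear_sample(img, x, y):
    """Sample a grayscale image at a non-integer (x, y), as needed when
    mapped coordinates fall between pixel centers."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy

# The map taking the unit triangle to a doubled one scales x and y by 2.
m = triangle_affine([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
                    [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)])
```

Applying `m` to every interior point of the source triangle, then reading colors with `bilinear_sample`, reproduces the per-triangle warp the text describes.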
2.3 Skin-tone fusion
After texture mapping, the color difference between the neutral texture and the overlay texture must be eliminated by a blending algorithm so that the fused skin tone looks natural. Existing image-processing and projection-stitching software commonly uses the alpha-transition method to achieve smooth transitions between superimposed images; it blends quickly and yields good results. Adopting this idea, the invention makes a standard Alpha map according to the feature distribution and texture map of the face model, and eliminates boundary discontinuities by setting a transition zone at the edges. The Alpha map serves two purposes: first, it produces smoothly transitioning texture boundaries; second, it applies gradual adjustment to other facial details, such as the eyebrows and the bridge of the nose, reducing the influence of the overlay texture on the neutral texture. Fig. 4 shows 10 hand-drawn Alpha maps; the white regions take their color from the overlay texture, and the black regions take their color from the neutral texture. The region between white and black is obtained by linear interpolation weighted by the Alpha-map pixels; coverage tests showed that the Alpha map numbered 10 achieves the best fusion effect.
In the actual fusion process, the model's neutral texture, the Alpha map, and the overlay texture are processed to the same size. The pixels of the Alpha map serve as the interpolation weights of the mixing, finally forming a naturally fused texture. The mixing formula of the three images is:
C(x, y) = Cbase(x, y) * (255 − CAlpha(x, y)) + Coverlay(x, y) * CAlpha(x, y),  where 0 ≤ CAlpha(x, y) ≤ 255    (7)
where Cbase(x, y), CAlpha(x, y), and Coverlay(x, y) denote the color values of the neutral texture, the Alpha map, and the overlay texture at coordinate (x, y).
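A minimal sketch of the per-pixel blend of formula (7), assuming 8-bit images of equal size. Note one assumption on my part: the printed formula's weights sum to 255 rather than 1, so the code divides by 255 to keep the result in the valid [0, 255] range:

```python
import numpy as np

def alpha_blend(base, overlay, alpha):
    """Blend the neutral texture (base) and overlay texture (the face photo)
    with an 8-bit Alpha map, following formula (7).  The division by 255 is
    added here as a normalization so the output stays in [0, 255]."""
    b = base.astype(np.float64)
    o = overlay.astype(np.float64)
    a = alpha.astype(np.float64)
    out = (b * (255.0 - a) + o * a) / 255.0
    return np.clip(out, 0, 255).astype(np.uint8)

# White Alpha regions (255) take the overlay color, black regions (0) the
# neutral-texture color, matching the Fig. 4 convention described above.
base = np.full((2, 2), 100, dtype=np.uint8)
overlay = np.full((2, 2), 200, dtype=np.uint8)
photo_side = alpha_blend(base, overlay, np.full((2, 2), 255, dtype=np.uint8))
```

Intermediate gray values of the Alpha map then yield the linear transition zone at the facial boundary.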
The above is only the best mode of carrying out the invention. Any simple change or equivalent replacement of the technical scheme that a person skilled in the art can readily conceive within the technical scope of this disclosure falls within the protection scope of the invention.

Claims (5)

1. A realistic face generation method based on a single photo, characterized in that it comprises the following steps:
Step 1. Feature-point calibration: obtain a frontal two-dimensional face photo and mark 13 feature points on 7 facial parts to determine the approximate locations of the facial features;
Step 2. Model matching: compare the feature-point distances between the photo and the models, and match the model with the highest similarity in the model library;
Step 3. Texture mapping: following the texture distribution of the face's true colors, map the overlay texture onto the neutral texture, where the neutral texture is the model's default texture and the overlay texture is the face photo; through the mapping of the 13 feature points and their surrounding regions in the two texture layers, add the textural features of the target face to the matched model;
Step 4. Skin-tone fusion: use an Alpha map so that the boundary regions of the fused face transition smoothly from the overlay texture to the neutral texture; at the same time, use the Alpha map to fine-tune detail positions such as the eyebrows and the bridge of the nose;
Step 5. Model adjustment: adjust characteristics such as the size and position of the facial features of the matched model by vector interpolation, generating a personalized model that matches the subject's facial features.
2. The realistic face generation method based on a single photo according to claim 1, characterized in that Step 5 constructs a model regulator for adjusting slight variations of the facial features, generating a personalized, realistic face;
Let the vertices Vi = (x, y, z)^T of the face mesh form the vector group H0 = (V1, V2, V3, ..., Vn), where n is the number of vertices; after the eyes are transformed, the positions of some points change, forming the vector group H1 = (V1', V2', V3', ..., Vn'); assign this transformation a weight W; then every transition state Hx between H0 and H1 can be computed by linear interpolation;
Hx = H0 + W * (H1 − H0)    (1)
A single transformation rarely meets the needs of a face model; multiple transformations must be selected according to the facial features, each assigned a weight Wi; the mixing formula of m transformations is then:
Hx = H0 + Σ_{i=1}^{m} Wi * (Hi − H0)    (2)
where m is determined by the needs of the practical application; in the experiments, mesh actuators for 23 different transformations were defined, including: eye height, eye distance, eye size, eyebrow height, eyebrow depth, nose size, nose depth, nose height, nose-bridge depth, nose-bridge width, nostril width, mouth height, mouth thickness, mouth width, lip coverage, cheekbone depth, cheekbone width, cheekbone height, cheek width, chin height, chin depth, chin width, and jaw width; the actuators deform the mesh by controlling the weights.
3. The realistic face generation method based on a single photo according to claim 1, characterized in that the model-matching method in Step 2 is as follows: first, normalize the photo and compute the distances and proportions between the principal feature points; second, determine the rough face-shape class from the feature-point distances and proportions; finally, within the model library of the corresponding face-shape class, match the Euclidean distances between corresponding feature points of model and photo, and use formula (3) to find the closest three-dimensional model;
E = Σ_{i=1}^{n} (Di − Di')^2 * λi    (3)
where Di is the distance between a pair of feature points in the photo, Di' is the distance between the corresponding feature points in the three-dimensional model, λi is the weight of each feature-point distance, determined by its contribution and an empirical formula, and n is the number of distances compared.
4. The realistic face generation method based on a single photo according to claim 1, characterized in that in Step 3 the marked feature-point positions of the neutral texture and of the overlay texture are assumed to form two two-dimensional point sets V1 and V2; because of differences in manual marking, corresponding points in V1 and V2 will not lie at exactly the same positions, so a mapping method is needed to fuse them naturally; take P1 ∈ V1 and P2 ∈ V2, extend P1 and P2 to three-dimensional vectors P1 = (X1, Y1, 1) and P2 = (X2, Y2, 1), and assume a matrix M exists; then:
M = P1 * P2^(−1)    (4)
After obtaining the transformation matrix of each point, the conversion of characteristic point is applied in the pixel around characteristic point, this flow process uses common triangle mode of texturing that covering texture is applied to model texture, characteristic point is connected, form the grid covering face, grid now has concurrently triangle and tetragon, by splitting, polygon is all reduced to triangle, is in the pixel in the middle of triangle and is subject to the impact on Atria summit simultaneously;
Assume the three vertices of a triangle are P1, P2, P3; taking them as three columns forms the 3×3 matrix (P1, P2, P3). Assume the transformed triangle is (P1′, P2′, P3′); then:
M · (P1, P2, P3) = (P1′, P2′, P3′)    (5)
M = (P1′, P2′, P3′) · (P1, P2, P3)⁻¹    (6)
For any point P_i inside the triangle, M · P_i yields the mapped point P_i′, which determines the mapped coordinates of every point in the triangle. Before mapping all coordinates are integers, but the mapped coordinates are generally not, so they no longer correspond one-to-one with the pixels of the image.
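The per-triangle transformation of formulas (5) and (6) can be sketched as below. The vertex coordinates are illustrative, not from the patent; the vertices are stacked as homogeneous columns (X, Y, 1) and M is solved directly as in formula (6).

```python
import numpy as np

def triangle_affine(src, dst):
    """Solve M from formula (6); src/dst are three (x, y) vertex pairs."""
    S = np.vstack([np.asarray(src, float).T, np.ones(3)])   # 3x3 matrix (P1, P2, P3)
    D = np.vstack([np.asarray(dst, float).T, np.ones(3)])   # 3x3 matrix (P1', P2', P3')
    return D @ np.linalg.inv(S)                             # M = D * S^-1, formula (6)

def map_point(M, p):
    """Apply M to an interior point; the result is generally non-integer."""
    x, y, _ = M @ np.array([p[0], p[1], 1.0])               # last row of M is (0, 0, 1)
    return (x, y)

src = [(0, 0), (10, 0), (0, 10)]
dst = [(2, 3), (12, 3), (2, 13)]      # the same triangle translated by (2, 3)
M = triangle_affine(src, dst)
print(map_point(M, (4, 4)))           # -> (6.0, 7.0)
```

In practice, the non-integer target coordinates the claim mentions are usually handled by iterating over destination pixels and sampling the source through M⁻¹ with bilinear interpolation, rather than scattering source pixels forward.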
5. The realistic face generation method based on a single photo according to claim 1, characterized in that in the actual fusion process of Step4, the model's neutral texture, the Alpha map, and the overlay texture are processed to the same size, and the pixel values of the Alpha map serve as interpolation weights in the blending, ultimately forming a naturally fused texture. The blending formula for the three images is:
C(x, y) = C_base(x, y) · (255 − C_Alpha(x, y)) + C_overlay(x, y) · C_Alpha(x, y), where 0 ≤ C_Alpha(x, y) ≤ 255    (7)
where C_base(x, y), C_Alpha(x, y), and C_overlay(x, y) denote the color values of the neutral texture, the Alpha map, and the overlay texture at coordinate (x, y), respectively.
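The three-image blend of formula (7) can be sketched as follows. The arrays standing in for the neutral texture, Alpha map, and overlay texture are illustrative; the sketch divides by 255 so that the weights sum to one and the result stays in [0, 255], a normalization that formula (7) leaves implicit.

```python
import numpy as np

def blend(base, alpha, overlay):
    """C = (base * (255 - alpha) + overlay * alpha) / 255, per pixel -- formula (7)."""
    base = np.asarray(base, dtype=float)
    alpha = np.asarray(alpha, dtype=float)      # Alpha map values in [0, 255]
    overlay = np.asarray(overlay, dtype=float)
    c = (base * (255.0 - alpha) + overlay * alpha) / 255.0
    return np.clip(c, 0, 255).astype(np.uint8)

base = np.full((2, 2), 100, dtype=np.uint8)     # neutral texture
overlay = np.full((2, 2), 200, dtype=np.uint8)  # overlay (covering) texture
alpha = np.array([[0, 255],                     # 0   -> pure neutral texture
                  [128, 64]], dtype=np.uint8)   # 255 -> pure overlay texture

print(blend(base, alpha, overlay))
```

Where the Alpha map is 0 the neutral texture passes through unchanged, where it is 255 the overlay replaces it completely, and intermediate values give the gradual transition that makes the fused face texture look natural.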
CN201610035432.5A 2016-01-19 2016-01-19 Realistic face generating method based on single photo Pending CN105719326A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610035432.5A CN105719326A (en) 2016-01-19 2016-01-19 Realistic face generating method based on single photo


Publications (1)

Publication Number Publication Date
CN105719326A true CN105719326A (en) 2016-06-29

Family

ID=56147507

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610035432.5A Pending CN105719326A (en) 2016-01-19 2016-01-19 Realistic face generating method based on single photo

Country Status (1)

Country Link
CN (1) CN105719326A (en)


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101826217A (en) * 2010-05-07 2010-09-08 上海交通大学 Rapid generation method for facial animation
CN103366400A (en) * 2013-07-24 2013-10-23 深圳市华创振新科技发展有限公司 Method for automatically generating three-dimensional head portrait


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TAN Guoxin et al.: "An interactive generation method for realistic three-dimensional faces", Geomatics and Information Science of Wuhan University (武汉大学学报·信息科学版) *

Cited By (44)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106407886A (en) * 2016-08-25 2017-02-15 广州御银科技股份有限公司 Apparatus for establishing face model
CN106372333A (en) * 2016-08-31 2017-02-01 北京维盛视通科技有限公司 Method and device for displaying clothes based on face model
CN106780714A (en) * 2016-11-10 2017-05-31 深圳市咖啡帮餐饮顾问有限公司 The generation method and system of face 3D models
CN106652025A (en) * 2016-12-20 2017-05-10 五邑大学 Three-dimensional face modeling method and three-dimensional face modeling printing device based on video streaming and face multi-attribute matching
CN106652025B (en) * 2016-12-20 2019-10-01 五邑大学 A kind of three-dimensional face modeling method and printing equipment based on video flowing Yu face multi-attribute Matching
CN108305312A (en) * 2017-01-23 2018-07-20 腾讯科技(深圳)有限公司 The generation method and device of 3D virtual images
CN106920277A (en) * 2017-03-01 2017-07-04 浙江神造科技有限公司 Simulation beauty and shaping effect visualizes the method and system of online scope of freedom carving
CN106952221A (en) * 2017-03-15 2017-07-14 中山大学 A kind of three-dimensional automatic Beijing Opera facial mask making-up method
CN106952221B (en) * 2017-03-15 2019-12-31 中山大学 Three-dimensional Beijing opera facial makeup automatic making-up method
CN108876886B (en) * 2017-05-09 2021-07-27 腾讯科技(深圳)有限公司 Image processing method and device and computer equipment
CN108876886A (en) * 2017-05-09 2018-11-23 腾讯科技(深圳)有限公司 Image processing method, device and computer equipment
CN107316340A (en) * 2017-06-28 2017-11-03 河海大学常州校区 A kind of fast human face model building based on single photo
CN107316340B (en) * 2017-06-28 2020-06-19 河海大学常州校区 Rapid face modeling method based on single photo
CN107507263B (en) * 2017-07-14 2020-11-24 西安电子科技大学 Texture generation method and system based on image
CN107507263A (en) * 2017-07-14 2017-12-22 西安电子科技大学 A kind of Texture Generating Approach and system based on image
CN109325990A (en) * 2017-07-27 2019-02-12 腾讯科技(深圳)有限公司 Image processing method and image processing apparatus, storage medium
CN109325990B (en) * 2017-07-27 2022-11-29 腾讯科技(深圳)有限公司 Image processing method, image processing apparatus, and storage medium
CN107506714A (en) * 2017-08-16 2017-12-22 成都品果科技有限公司 A kind of method of face image relighting
CN107705355A (en) * 2017-09-08 2018-02-16 郭睿 A kind of 3D human body modeling methods and device based on plurality of pictures
CN107578469A (en) * 2017-09-08 2018-01-12 明利 A kind of 3D human body modeling methods and device based on single photo
CN107705248A (en) * 2017-10-31 2018-02-16 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer-readable recording medium
CN107680158A (en) * 2017-11-01 2018-02-09 长沙学院 A kind of three-dimensional facial reconstruction method based on convolutional neural networks model
CN108154550A (en) * 2017-11-29 2018-06-12 深圳奥比中光科技有限公司 Face real-time three-dimensional method for reconstructing based on RGBD cameras
CN108154550B (en) * 2017-11-29 2021-07-06 奥比中光科技集团股份有限公司 RGBD camera-based real-time three-dimensional face reconstruction method
CN108053219A (en) * 2017-12-29 2018-05-18 浙江万里学院 A kind of safe Intelligent logistics reimbursement of expense method
US11257299B2 (en) 2018-04-18 2022-02-22 Tencent Technology (Shenzhen) Company Limited Face model processing for facial expression method and apparatus, non-volatile computer-readable storage-medium, and electronic device
WO2019201027A1 (en) * 2018-04-18 2019-10-24 腾讯科技(深圳)有限公司 Face model processing method and device, nonvolatile computer-readable storage medium and electronic device
CN108776983A (en) * 2018-05-31 2018-11-09 北京市商汤科技开发有限公司 Based on the facial reconstruction method and device, equipment, medium, product for rebuilding network
CN110858411A (en) * 2018-08-22 2020-03-03 金运数字技术(武汉)有限公司 Method for generating 3D head model based on single face picture
CN111382618A (en) * 2018-12-28 2020-07-07 广州市百果园信息技术有限公司 Illumination detection method, device, equipment and storage medium for face image
CN111382618B (en) * 2018-12-28 2021-02-05 广州市百果园信息技术有限公司 Illumination detection method, device, equipment and storage medium for face image
US11908236B2 (en) 2018-12-28 2024-02-20 Bigo Technology Pte. Ltd. Illumination detection method and apparatus for face image, and device and storage medium
CN110148082B (en) * 2019-04-02 2023-03-28 杭州小影创新科技股份有限公司 Mobile terminal face image face real-time deformation adjusting method
CN110148082A (en) * 2019-04-02 2019-08-20 杭州趣维科技有限公司 A kind of mobile terminal facial image face real-time deformation adjusting method
CN110458924A (en) * 2019-07-23 2019-11-15 腾讯科技(深圳)有限公司 A kind of three-dimensional facial model method for building up, device and electronic equipment
CN110738732A (en) * 2019-10-24 2020-01-31 重庆灵翎互娱科技有限公司 three-dimensional face model generation method and equipment
CN110738732B (en) * 2019-10-24 2024-04-05 重庆灵翎互娱科技有限公司 Three-dimensional face model generation method and equipment
CN111640056A (en) * 2020-05-22 2020-09-08 构范(厦门)信息技术有限公司 Model adaptive deformation method and system
CN111640056B (en) * 2020-05-22 2023-04-11 构范(厦门)信息技术有限公司 Model adaptive deformation method and system
CN111738087A (en) * 2020-05-25 2020-10-02 完美世界(北京)软件科技发展有限公司 Method and device for generating face model of game role
CN111738087B (en) * 2020-05-25 2023-07-25 完美世界(北京)软件科技发展有限公司 Method and device for generating face model of game character
CN113160412A (en) * 2021-04-23 2021-07-23 福建天晴在线互动科技有限公司 Automatic software model generation method and system based on texture mapping
CN113160412B (en) * 2021-04-23 2023-06-30 福建天晴在线互动科技有限公司 Automatic software model generation method and system based on texture mapping
CN113240810A (en) * 2021-04-28 2021-08-10 深圳羽迹科技有限公司 Face model fusion method, system and equipment

Similar Documents

Publication Publication Date Title
CN105719326A (en) Realistic face generating method based on single photo
CN105844706B (en) A kind of full-automatic three-dimensional scalp electroacupuncture method based on single image
CN101324961B (en) Human face portion three-dimensional picture pasting method in computer virtual world
CN104463938A (en) Three-dimensional virtual make-up trial method and device
CN103530907B (en) Complicated three-dimensional model drawing method based on images
CN105913416A (en) Method for automatically segmenting three-dimensional human face model area
CN103489224B (en) A kind of interactive three-dimensional point cloud color edit methods
CN102419868A (en) Device and method for modeling 3D (three-dimensional) hair based on 3D hair template
CN104103090A (en) Image processing method, customized human body display method and image processing system
CN102360513B (en) Object illumination moving method based on gradient operation
CN104239855B (en) Image style transfer synthesis method based on stroke synthesis
CN103606190A (en) Method for automatically converting single face front photo into three-dimensional (3D) face model
CN101968892A (en) Method for automatically adjusting three-dimensional face model according to one face picture
CN109377557A (en) Real-time three-dimensional facial reconstruction method based on single frames facial image
CN102074040A (en) Image processing apparatus, image processing method, and program
CN101887366A (en) Digital simulation and synthesis technology with artistic style of Yunnan heavy-color painting
CN105787974A (en) Establishment method for establishing bionic human facial aging model
CN103854306A (en) High-reality dynamic expression modeling method
Wang Landscape design of coastal area based on virtual reality technology and intelligent algorithm
CN109389682A (en) A kind of three-dimensional face model automatic adjusting method
CN101968891A (en) System for automatically generating three-dimensional figure of picture for game
CN108805090A (en) A kind of virtual examination cosmetic method based on Plane Gridding Model
CN104091366B (en) Three-dimensional intelligent digitalization generation method and system based on two-dimensional shadow information
CN104103091A (en) 3D intelligent modeling method and system and a flexible manufacturing system of 3D model
CN110163961A (en) A method of described based on landforms and generates three-dimensional virtual world

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160629

RJ01 Rejection of invention patent application after publication